Music production
From Fast Company
The future of music is human-generated
The music industry's value is shifting from songs to the human connection behind performances as AI-generated music becomes abundant.
The MPC Sample features a four-by-four grid of 16 RGB-backlit pads that respond to pressure, allowing for dynamic sound manipulation. A full-color display and three knobs provide hands-on control over effects and sound.
Galen Buckwalter, a 69-year-old research psychologist and quadriplegic, participated in a brain implant study to contribute to science that aids those with paralysis. The six chips in his brain decode movement intention, allowing him to operate a computer and feel sensations in his fingers again.
Brief videos generated by the model, including a clip featuring Tom Cruise fighting Brad Pitt, soon went viral and drew intense criticism from Hollywood. While one successful screenwriter declared that the footage meant, "It's likely over for us," studios quickly sent ByteDance a flurry of cease-and-desist letters, with Disney's lawyers accusing the company of a "virtual smash-and-grab of Disney's IP."
By the early 1900s, player pianos had evolved to more fully reproduce a human performance, including expressive subtleties like tempo changes and the use of a damper pedal. The human role went from deskilled to fully deprecated as electric motors replaced foot-powered bellows. With the Seeburg Lilliputian Model L, the only job left for humans who wanted to play the piano in the 1920s was to put in a coin.
The unit can run on three AA batteries (a set is included) or on the included USB-A to DC adapter (you'll need your own wall charger). The included instruction manual helps you make sense of what the heck all the knobs, levers, buttons, and lights mean.
The vocoder was never supposed to be a revolution in music. Its development began a century ago, when an engineer at Bell Labs was looking for a simpler way to send phone calls across copper telephone lines.
If you've ever used tools like PhonicMind or LALAL.AI, you know the drill: Upload your MP3. Wait in a queue. Pay for "credits" or high-quality downloads. Your file sits on someone else's server. For musicians, producers, or just karaoke fans, this is slow and privacy-invasive.
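The appeal of doing this locally is easy to demonstrate. As a rough illustration only (these services actually use machine-learning source separation, not this trick), here is the decades-old phase-cancellation karaoke technique, which runs entirely on your own machine and assumes the vocals are mixed to the center of a stereo file:

```python
import numpy as np

def remove_center_vocals(stereo: np.ndarray) -> np.ndarray:
    """Crude karaoke trick: vocals are typically mixed dead-center,
    so subtracting one channel from the other cancels them.

    stereo: float array of shape (n_samples, 2), values in [-1, 1].
    Returns a stereo array containing only the "sides" signal.
    """
    left, right = stereo[:, 0], stereo[:, 1]
    side = (left - right) / 2.0  # center-panned content cancels out here
    return np.stack([side, -side], axis=1)

# Toy check: a center-panned "vocal" (identical in both channels) vanishes,
# while a hard-left "guitar" survives in the output.
t = np.linspace(0.0, 1.0, 1000)
vocal = np.sin(2 * np.pi * 220 * t)            # panned center
guitar = np.sin(2 * np.pi * 110 * t)           # panned hard left
mix = np.stack([vocal + guitar, vocal], axis=1)
out = remove_center_vocals(mix)
```

The trade-off is obvious in practice: anything else mixed to the center (bass, kick drum) gets cancelled too, which is exactly why modern tools moved to learned separation. Open-source models such as Demucs offer that quality offline, with no upload, queue, or credits.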
I imagined it would require technical skill, like some sort of advanced prompt engineering where I'd need to specify exactly how each file interacted with every other file. I thought I'd need to understand the "rules" of combining images with audio, or know the exact syntax for referencing multiple inputs. The reality was much simpler. Multi-modal input just means you can throw different types of files at Seedance 2.0 and tell the model what you want it to do with them.
Junho Park's graduation concept borrows all the right cues from TE's playbook: the modular control layout, the single bold color, the mix of knobs and buttons that practically beg to be touched. But it redirects them toward a gap in the market. Where Teenage Engineering designs for people who already understand synthesis and sampling, the T.M-4 targets people who have ideas but no vocabulary to express them.
But to anyone tracking the data over the past few years, it was inevitable. In 2022, Bad Bunny's Un Verano Sin Ti redefined the market, driving Latin music's streaming growth to new heights. It later became the first Spanish-language album nominated for Grammy Album of the Year. The takeaway is simple: When you have accurate, real-time data, you don't guess where culture is going, you know.
The Phase8 uses a new form of "acoustic synthesis" that combines acoustic sound generation with electronic control. Takahashi says the synthesizer is "beyond analog vs. digital" and "beyond electronics" altogether. It features chromatically tuned steel resonators that create an acoustic sound similar to a kalimba's. The resonators' signals can be manipulated via onboard effects and sequenced like a traditional synthesizer. Here's a video of the synth in action.
Bandcamp has announced it will no longer allow AI-generated music to be hosted on its platform. In a post shared on Reddit, the company's support team revealed their plans to implement a policy prohibiting "any use of AI tools to impersonate other artists or styles," elaborating more firmly that "music and audio that is generated wholly or in substantial part by AI is not permitted on Bandcamp."
When a scientist feeds a data set into a bot and says, "Give me hypotheses to test," they are asking the bot to be the creator, not a creative partner. Humans tend to defer to ideas produced by bots, assuming that the bot's knowledge exceeds their own. And when they do, they end up exploring fewer avenues for possible solutions to their problem.
First of all, it offers four times the processing power of previous MPCs, which is enough to load up to 32 virtual instruments at the same time. This is assisted by a full 16GB of RAM, which is a whole lot in this era of AI tomfoolery. The XL can handle 16 audio tracks simultaneously. In my experience with previous units, this is more than enough for a full song.
Following the strong early traction of Vibes in Meta AI, we are testing a standalone app to build on that momentum. We've seen that users are increasingly leaning into the format to create, discover, and share AI-generated video with friends. This standalone app provides a dedicated home for that experience, offering people a more focused and immersive environment.
As AI systems become more capable, more accessible, and more embedded in everyday workflows, creativity is emerging as one of the most important human skills in AI development and deployment. Not creativity as decoration or aesthetics, but creativity as problem framing, decision-making, and human judgment. In an era where many organizations are using the same models, tools, and platforms, creative thinking is what separates meaningful outcomes from generic ones.