Launching Labs Marketing Named in “Top 30 Web Design Companies”
Launching Labs Marketing is recognized in DesignRush's 2021 list of the "Top 30 Denver Web Design Companies." Launching Labs builds websites and logos for small to medium businesses ready to define themselves and stand out.
DesignRush is a reliable online guide to finding the best professional companies and agencies, categorized according to vertical and area of expertise. After evaluating and analyzing Launching Labs' performance with some of the most prominent brands in the US, the online platform gave it a spot among the most reputable agencies in Denver.
Launching Labs Marketing was founded in 2017 by Mary Cochran and Alison Werning, two experienced marketing professionals. Cochran and Werning came together to help companies create cohesive visual and written stories. Their graphics and digital marketing expertise allow them to quickly create a look and feel for a company that truly represents it.
“We’re proud to be included in DesignRush’s Top 30 Web Design Companies in Denver. We work hard to get to the root of each customer’s differentiators and incorporate that into their websites and logos,” said Mary Cochran, Launching Labs co-founder.
DesignRush evaluates thousands of agencies and is committed to helping brands find the best solutions for their needs. The platform lets users search for partners based on clients, portfolios, reviews, pricing structure, and testimonials. Launching Labs Marketing’s inclusion among the Top 30 Web Design Companies in Denver marks it as one of the powerhouse firms in Colorado.
About Launching Labs
Launching Labs Marketing is a full-service agency that offers branding, web design, and marketing with a consultative approach. To unleash your business and guide its growth, we start with logo and website design, followed by marketing and PR to increase awareness, recognition, and sales. We help startups and small to medium businesses establish or grow recognition of their organizations. We delve into your organization’s marketing needs and recommend a plan. Founded in Boulder, CO, Launching Labs serves clients nationwide across a range of industries, including technology, non-profit, lifestyle, health, and apparel. Launching Labs can also fill gaps in internal teams. For more information, visit launchinglabs.com or call (720) 600-9059.
Everyone Is Talking About AI—But Do They Mean the Same Thing? by David Pring-Mill
Singularity Hub quotes Launching Labs in this article about Artificial Intelligence.
In 2017, artificial intelligence attracted $12 billion of VC investment. We are only beginning to discover the usefulness of AI applications. Amazon recently unveiled a brick-and-mortar grocery store that has successfully supplanted cashiers and checkout lines with computer vision, sensors, and deep learning. Between the investment, the press coverage, and the dramatic innovation, “AI” has become a hot buzzword. But does it even exist yet?
At the World Economic Forum, Dr. Kai-Fu Lee, a Taiwanese venture capitalist and the founding president of Google China, remarked, “I think it’s tempting for every entrepreneur to package his or her company as an AI company, and it’s tempting for every VC to want to say ‘I’m an AI investor.’” He then observed that some of these AI bubbles could burst by the end of 2018, referring specifically to “the startups that made up a story that isn’t fulfillable, and fooled VCs into investing because they don’t know better.”
However, Dr. Lee firmly believes AI will continue to progress and will take many jobs away from workers. So, what is the difference between legitimate AI, with all of its pros and cons, and a made-up story?
If you parse through just a few stories that are allegedly about AI, you’ll quickly discover significant variation in how people define it, with a blurred line between emulated intelligence and machine learning applications.
I spoke to experts in the field of AI to try to find consensus, but the very question opens up more questions. For instance, when is it important to be accurate to a term’s original definition, and when does that commitment to accuracy amount to the splitting of hairs? It isn’t obvious, and hype is oftentimes the enemy of nuance. Additionally, there is now a vested interest in that hype—$12 billion, to be precise.
This conversation is also relevant because world-renowned thought leaders have been publicly debating the dangers posed by AI. Facebook CEO Mark Zuckerberg suggested that naysayers who attempt to “drum up these doomsday scenarios” are being negative and irresponsible. On Twitter, business magnate and OpenAI co-founder Elon Musk countered that Zuckerberg’s understanding of the subject is limited. In February, Musk had a similar exchange with Harvard professor Steven Pinker, tweeting that Pinker doesn’t understand the difference between functional/narrow AI and general AI.
Given the fears surrounding this technology, it’s important for the public to clearly understand the distinctions between different levels of AI so that they can realistically assess the potential threats and benefits.
As Smart As a Human?
Erik Cambria, an expert in the field of natural language processing, told me, “Nobody is doing AI today and everybody is saying that they do AI because it’s a cool and sexy buzzword. It was the same with ‘big data’ a few years ago.”
Cambria mentioned that AI, as a term, originally referenced the emulation of human intelligence. “And there is nothing today that is even barely as intelligent as the most stupid human being on Earth. So, in a strict sense, no one is doing AI yet, for the simple fact that we don’t know how the human brain works,” he said.
He added that the term “AI” is often used in reference to powerful tools for data classification. These tools are impressive, but they operate on a totally different spectrum from human cognition. Additionally, Cambria has noticed people claiming that neural networks are part of the new wave of AI. This is bizarre to him because that technology already existed fifty years ago.
However, technologists no longer need to perform feature extraction by themselves, and they have access to far greater computing power. All of these advances are welcome, but it is perhaps dishonest to suggest that machines have emulated the intricacies of our cognitive processes.
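As a rough illustration of that shift (a minimal sketch, not something drawn from the article or from Cambria's work), the snippet below contrasts a classifier trained on hand-engineered features with a small neural network that learns its own features from raw pixels. The dataset, the crude row/column features, and the model sizes are arbitrary placeholders chosen only for brevity.

```python
# Minimal sketch: hand-crafted features vs. learned features.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
import numpy as np

X, y = load_digits(return_X_y=True)  # 8x8 digit images, flattened to 64 pixels
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# "Classic" approach: a person decides what the features are.
# Here, crude per-row and per-column ink totals stand in for expert-crafted descriptors.
def handcrafted_features(images_flat):
    imgs = images_flat.reshape(-1, 8, 8)
    row_sums = imgs.sum(axis=2)  # 8 features: ink per row
    col_sums = imgs.sum(axis=1)  # 8 features: ink per column
    return np.hstack([row_sums, col_sums])

clf_manual = LogisticRegression(max_iter=1000)
clf_manual.fit(handcrafted_features(X_train), y_train)
print("hand-crafted features:", clf_manual.score(handcrafted_features(X_test), y_test))

# "Modern" approach: a neural network consumes raw pixels and learns its own
# internal representation; there is no manual feature-engineering step.
clf_learned = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
clf_learned.fit(X_train, y_train)
print("learned features:", clf_learned.score(X_test, y_test))
```

The point is not the accuracy numbers but the division of labor: in the first case a person decides what counts as a feature, while in the second the network derives its own internal representation from the raw data.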
“Companies are just looking at tricks to create a behavior that looks like intelligence but that is not real intelligence, it’s just a mirror of intelligence. These are expert systems that are maybe very good in a specific domain, but very stupid in other domains,” he said.
This mimicry of intelligence has inspired the public imagination. Domain-specific systems have delivered value in a wide range of industries. But those benefits have not lifted the cloud of confusion.
Assisted, Augmented, or Autonomous
When it comes to matters of scientific integrity, the issue of accurate definitions isn’t a peripheral matter. In a 1974 commencement address at the California Institute of Technology, Richard Feynman famously said, “The first principle is that you must not fool yourself—and you are the easiest person to fool.” In that same speech, Feynman also said, “You should not fool the layman when you’re talking as a scientist.” He opined that scientists should bend over backwards to show how they could be wrong. “If you’re representing yourself as a scientist, then you should explain to the layman what you’re doing—and if they don’t want to support you under those circumstances, then that’s their decision.”
In the case of AI, this might mean that professional scientists have an obligation to clearly state that they are developing extremely powerful, controversial, profitable, and even dangerous tools, which do not constitute intelligence in any familiar or comprehensive sense.
The term “AI” may have become overhyped and confused, but there are already some efforts underway to provide clarity. A recent PwC report drew a distinction between “assisted intelligence,” “augmented intelligence,” and “autonomous intelligence.” Assisted intelligence is demonstrated by the GPS navigation programs prevalent in cars today. Augmented intelligence “enables people and organizations to do things they couldn’t otherwise do.” And autonomous intelligence “establishes machines that act on their own,” such as autonomous vehicles.
Roman Yampolskiy is an AI safety researcher who wrote the book “Artificial Superintelligence: A Futuristic Approach.” I asked him whether the broad and differing meanings might present difficulties for legislators attempting to regulate AI.
Yampolskiy explained, “Intelligence (artificial or natural) comes on a continuum and so do potential problems with such technology. We typically refer to AI which one day will have the full spectrum of human capabilities as artificial general intelligence (AGI) to avoid some confusion. Beyond that point it becomes superintelligence. What we have today and what is frequently used in business is narrow AI. Regulating anything is hard, technology is no exception. The problem is not with terminology but with complexity of such systems even at the current level.”
When asked if people should fear AI systems, Dr. Yampolskiy commented, “Since capability comes on a continuum, so do problems associated with each level of capability.” He mentioned that accidents are already reported with AI-enabled products, and as the technology advances further, the impact could spread beyond privacy concerns or technological unemployment. These concerns about the real-world effects of AI will likely take precedence over dictionary-minded quibbles. However, the issue is also about honesty versus deception.
Is This Buzzword All Buzzed Out?
Finally, I directed my questions towards a company that is actively marketing an “AI Virtual Assistant.” Carl Landers, the CMO at Conversica, acknowledged that there are a multitude of explanations for what AI is and isn’t.
He said, “My definition of AI is technology innovation that helps solve a business problem. I’m really not interested in talking about the theoretical ‘can we get machines to think like humans?’ It’s a nice conversation, but I’m trying to solve a practical business problem.”
I asked him if AI is a buzzword that inspires publicity and attracts clients. According to Landers, this was certainly true three years ago, but those effects have already started to wane. Many companies now claim to have AI in their products, so it’s less of a differentiator. However, there is still a specific intention behind the word. Landers hopes to convey that previously impossible things are now possible. “There’s something new here that you haven’t seen before, that you haven’t heard of before,” he said.
According to Brian Decker, founder of Encom Lab, machine learning algorithms only work to satisfy their preexisting programming, not out of an interior drive for better understanding. Therefore, he views the argument over AI as entirely semantic.
Decker stated, “A marketing exec will claim a photodiode controlled porch light has AI because it ‘knows when it is dark outside,’ while a good hardware engineer will point out that not one bit in a register in the entire history of computing has ever changed unless directed to do so according to the logic of preexisting programming.”
Although it’s important for everyone to be on the same page regarding specifics and underlying meaning, AI-powered products are already powering past these debates by creating immediate value for humans. And ultimately, humans care more about value than they do about semantic distinctions. In an interview with Quartz, Kai-Fu Lee revealed that algorithmic trading systems have already given him an 8X return over his private banking investments. “I don’t trade with humans anymore,” he said.