
Catch all data analytics thread


Laxtonto


Needless to say, the thread title should be self-explanatory.

After reading the CR thread regarding Facebook and Cambridge Analytica, it occurred to me that this type of thread might be needed on the Shag somewhere.

Let's start with the basics. The world is constantly generating structured and unstructured data. We are just now beginning to create both the technology and the techniques to really start turning this voluminous amount of data into real, actionable information.

This goes by many names. It has been called Management Science, or Decision Support Systems, or Business Intelligence, or Data Analytics, or Data Mining, or Big Data, or [insert buzzword here].

This is an area I have been working and researching in for a while now, and I would be happy to discuss what I can. I am sure there are others on the Shag who can fill in the spots where I lack expertise.

My area of specialty is unstructured data, text to be more precise, and how it can then be utilized in predictive and prescriptive modeling. This means I play with everything from Twitter and Amazon reviews to structured psychology interviews, journal abstracts, and SEC filings; as long as there is meaningful text involved, I'm there.
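To make that concrete, here is a minimal sketch of how raw text becomes features for a predictive model. The reviews and labels are made up, and it assumes scikit-learn is available; real projects get far messier than this:

```python
# Minimal sketch: turning raw text into features for a predictive model.
# Hypothetical toy data; assumes scikit-learn is installed.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

reviews = [
    "battery died after two days, would not buy again",
    "works exactly as advertised, great value",
    "stopped charging within a week",
    "excellent build quality, very happy",
]
labels = [0, 1, 0, 1]  # 0 = negative, 1 = positive (toy labels)

# TF-IDF turns each document into a sparse numeric vector;
# the classifier then learns which terms predict the label.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(reviews, labels)

print(model.predict(["battery stopped working"]))    # likely [0]
print(model.predict(["great quality, very happy"]))  # likely [1]
```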

 

 


This is actually an extremely loaded question, and probably not for the reasons you think. 

First I will provide the classic baseline, and from there I will give you my own personal answer.

The analytics discussion has really fragmented into three separate areas (and more than likely four, if things continue at the current pace). The first group is visualization: descriptive statistics applications, simple dashboards and the like. This is the level of analytics most likely to be "seen" by the large majority of the business environment. In many ways it is also the hardest one to teach, not because the skills are necessarily complex, but because the visualization process (and the implied ability to present the information in a functional manner, at a level the entire audience really understands) requires both an intuitive grasp of the "problem" and a true understanding of the strengths and limits of the data.
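At its simplest, the descriptive layer is just well-chosen summaries. A toy example with made-up sales data (pandas assumed); the hard part is deciding which of these numbers the audience actually needs to see, and how:

```python
# Minimal descriptive-analytics sketch on hypothetical sales data (pandas assumed).
import pandas as pd

sales = pd.DataFrame({
    "region":  ["North", "North", "South", "South", "West"],
    "month":   ["Jan", "Feb", "Jan", "Feb", "Jan"],
    "revenue": [120_000, 135_000, 98_000, 101_000, 87_000],
})

# The "analysis" is just summarizing; choosing *which* summaries to show
# (and how to present them) is where the skill lies.
summary = sales.groupby("region")["revenue"].agg(["sum", "mean"])
print(summary)
print(sales["revenue"].describe())  # basic descriptive statistics
```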

From there you have pure predictive. This is exactly what it sounds like: everything from simple regression and heuristic-driven applications to much more complicated concepts like neural networks. The biggest issue here is that many people miss the basic rule of all statistics: the data drives the method and analysis, not the other way around. Depending on the data characteristics and the questions you are asking, a wide variety of techniques are available. Never forget that prediction is just that, an attempt to predict the future from some set of data, and it is therefore subject to both the assumptions of the modeling technique used and the error inherent in the data itself.
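For the simple end of that spectrum, a bare-bones least-squares fit on made-up data (numpy assumed) shows the idea, and also the caveat: the prediction inherits the linearity assumption and the noise in the data behind it:

```python
# Simple predictive sketch: ordinary least squares on hypothetical data (numpy assumed).
import numpy as np

# Toy data: ad spend (in $k) vs. units sold.
ad_spend = np.array([10, 20, 30, 40, 50], dtype=float)
units    = np.array([120, 210, 290, 405, 500], dtype=float)

# Fit y = a*x + b with least squares.
X = np.column_stack([ad_spend, np.ones_like(ad_spend)])
(a, b), *_ = np.linalg.lstsq(X, units, rcond=None)

print(f"slope={a:.2f}, intercept={b:.2f}")
print("predicted units at $60k spend:", a * 60 + b)
# The forecast is only as good as the linearity assumption and the data it was fit on.
```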

Finally there is prescriptive. This is taking the raw data, and sometimes the predictive models you have already built, and using them to provide what-if interpretations. What if I change this input, what does it do? What if we change the production level to this, what does that flow into? This sounds extremely simple, and in some cases it can be. Simple techniques like basic linear and nonlinear optimization can provide best-case scenarios and answers when you change a limited number of input parameters or constraints. Then you have a wide variety of simulation techniques, which are much more interesting and complicated. Simulation allows much crazier applications: changing the underlying distributions of the input data, manipulating full-scale plant layouts, or changing a wide variety of parameters and constraints simultaneously, then running those probabilistic models or "simulations" many times to determine the new distribution of potential results, best-case and worst-case scenarios, and even simple expected values. This is one of the more complicated areas to work in because it requires an amazing grasp of whatever process you are simulating, an intuitive grasp of which levers you can manipulate, and a very detailed, intensive approach so you do not miss anything that is integral to the process.
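On the simple end, a tiny product-mix linear program (made-up numbers, scipy assumed) shows the what-if flavor: solve, change a constraint, and solve again to see how the best answer moves:

```python
# Prescriptive sketch: tiny product-mix linear program (hypothetical numbers, scipy assumed).
from scipy.optimize import linprog

# Maximize profit 40*x1 + 30*x2 (linprog minimizes, so negate the objective)
# subject to machine hours: 2*x1 + 1*x2 <= 100
#            labor hours:   1*x1 + 1*x2 <= 80
c = [-40, -30]
A_ub = [[2, 1], [1, 1]]
b_ub = [100, 80]

result = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print("optimal mix:", result.x, "profit:", -result.fun)

# The "what if": add 20 labor hours and re-solve to see how the answer shifts.
result2 = linprog(c, A_ub=A_ub, b_ub=[100, 100], bounds=[(0, None), (0, None)])
print("with extra labor:", result2.x, "profit:", -result2.fun)
```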

There is now a new "pillar" of analytics that is becoming an expected standalone category: data preparation, cleaning, manipulation, and storage/acquisition. This is by far the most labor-intensive, tedious, frustrating part of analytics. As more people move into analytics, there are now people who specialize in this instead of everyone doing their own. The birth of economies of scale in analytics, I guess.
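A taste of what that pillar looks like day to day, on a made-up messy extract (pandas assumed):

```python
# Data-preparation sketch: the unglamorous part of the job (hypothetical data, pandas assumed).
import pandas as pd

raw = pd.DataFrame({
    "customer": ["Acme", "acme ", "Beta Co", None, "Beta Co"],
    "amount":   ["1,200", "1200", "950", "300", "950"],
    "date":     ["2018-01-05", "2018-01-05", "2018-02-10", "2018-02-11", "2018-02-10"],
})

clean = (
    raw
    .dropna(subset=["customer"])  # drop rows missing a key field
    .assign(
        customer=lambda d: d["customer"].str.strip().str.title(),       # normalize names
        amount=lambda d: d["amount"].str.replace(",", "").astype(float),  # strings -> numbers
        date=lambda d: pd.to_datetime(d["date"]),                        # strings -> dates
    )
    .drop_duplicates()            # collapse records that are now identical
)
print(clean)
```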

Now, my personal view is the same answer I have always had: analytics does not really exist. It is just the extension of what we in management science have been doing since the 1940s; only the computing power and the scale of the data have changed. It is just another consulting word, like business intelligence, or DSS, or machine learning, or AI. It really doesn't describe anything; it just tries to encapsulate a series of techniques under a new umbrella.

This is actually driven at the academic level by the fight over who will control "analytics" and who gets the resources tied to the program. If you learn analytics at a computer science or data science driven institution, it is going to be very method-heavy and lack context. If you learn it in an IS school, it is going to be about data manipulation, data storage and the like. If you get it in a business school, it will be outcome-based and tied to problems with context. The problem with this is that all of these require a large investment of time and skills, so each emphasis comes at the cost of something else.

My personal philosophy is to teach the business approach and the logic skills and let the students spin up their techniques on their own. Every industry is different, so if you understand the concepts, you can figure out what you need to do or what additional skills you need to acquire to solve the problem. If you lack the business-problem skills, the rest is really meaningless.

If you know analytics and can explain your answers both on the technical side and to the layman, I can always get you a job. Analytics seems to be treated as a special subset in many companies, so you end up with more interaction with senior management teams than most traditional new employees get. If you can't figure out what they actually want done, then what data they have, and then a solution you can explain that provides them an answer, you are useless in their eyes. The only way that happens is if you also learn the problem-definition and presentation skills to go with the analytic techniques.
 
 

 


Funny you bring this up...

The oil and gas industry is a little reticent to open the Big Data door, but it's coming. The industry has always been viewed as open to new technology; hydraulic fracturing is a great example. At the same time, fracturing technology isn't new and was only implemented once prices became high enough for it to be economic.

Now "bots" are being used to automate certain accounting procedures, and I feel like Big Data analytics is on the horizon. There have been whispers of it. And with the oodles of information now being collected in geology, drilling, reservoir, etc., I feel like there's going to be Big Data talk sooner rather than later.


12 hours ago, blacklab said:

I started on a project at work using word vectors to predict which companies are looking for which products.

Word vectors are a pretty cool thing that I didn't even know existed a few weeks ago.
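Roughly, each word gets a dense numeric vector, you average them into a document vector, and you compare documents with cosine similarity. A toy sketch with made-up three-dimensional vectors (real embeddings are hundreds of dimensions, and the company/product terms here are purely hypothetical):

```python
# Toy word-vector sketch: made-up 3-dimensional vectors (real ones are 100-300 dims).
import numpy as np

word_vectors = {
    "pump":       np.array([0.9, 0.1, 0.0]),
    "valve":      np.array([0.8, 0.2, 0.1]),
    "compressor": np.array([0.7, 0.3, 0.0]),
    "invoice":    np.array([0.0, 0.9, 0.4]),
}

def doc_vector(words):
    """Average the vectors of the words we know (a common, simple baseline)."""
    vecs = [word_vectors[w] for w in words if w in word_vectors]
    return np.mean(vecs, axis=0)

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

company_doc = ["pump", "valve"]   # e.g., terms pulled from a company's site
product_doc = ["compressor"]      # terms describing a product line
print(cosine(doc_vector(company_doc), doc_vector(product_doc)))  # high similarity
```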

It's all about ball bearings vertical search these days

Quote

The Case Against Google

 

Critics say the search giant is squelching competition before it begins.

Should the government step in?

 

By CHARLES DUHIGG

FEB. 20, 2018

Shivaun Moeran and Adam Raff met, married and started a company — thereby sparking a chain of events that might, ultimately, take down this age of internet giants as we know it — because they were both huge nerds. In the late 1980s, Adam was studying programming at the University of Edinburgh, while Shivaun was focused on physics and computer science at King’s College London. They had mutual friends who kept insisting they were perfect for each other. So one weekend, they went on a date and discovered other similarities: They both loved stand-up comedy. Each had a science-minded father. They shared a weakness for puns.

In the years that followed, those overlapping enthusiasms led to cohabitation, a raucous wedding and parallel careers at big technology firms. The thing is, though, when you’re young and geeky and fall in love with someone else young and geeky, all your nerdy friends want you to set them up on dates as well. So Adam and Shivaun, who took Adam’s last name after marriage, approached the problem like two good programmers: They designed a dating app.

The app was known as MatchMate, and the idea was simple: Rather than just pairing people with similar interests, their software would put together potential mates according to an array of parameters, such as which pub they were currently standing in, and whether they had friends in common, and what movies they liked or candidates they voted for, and dozens of other factors that might be important in finding a life partner (or at least a tonight partner). The magic of MatchMate was that it could allow a user to mix variables and search for pairings within a specific group, a trick that computer scientists call parameterization. “It was like asking your best friend to set you up,” Shivaun told me. “Someone who says, ‘Well, you probably think you’d like this guy because he’s handsome, but actually you’d like this other guy because he’s not as good-looking, but he’s really funny.’ ”

Within computer science, this kind of algorithmic alchemy is sometimes known as vertical search, and it’s notoriously hard to master. Even Google, with its thousands of Ph.D.s, gets spooked by vertical-search problems. “Google’s built around horizontal search, which means if you type in ‘What’s the population of Myanmar,’ then Google finds websites that include the words ‘Myanmar’ and ‘population,’ and figures out which ones are most likely to answer your question,” says Neha Narula, who was a software engineer at Google before joining the M.I.T. Media Lab. You don’t really care if Google sends you to Wikipedia or a news article or some other site, as long as its results are accurate and trustworthy. But, Narula says, “when you start asking questions with only one correct answer, like, Which site has the cheapest vacuum cleaner? — that’s much, much harder.”

For search engines like Google, finding that one correct answer becomes particularly difficult when people have numerous parameters they want satisfied: Which vacuum cleaner is cheapest but also energy-efficient and good on thick carpets and won’t scare the dog? To balance those competing preferences, you need a great vertical-search engine, which was something Adam and Shivaun had thought a lot about.

Soon the Raffs began daydreaming about turning their idea into a moneymaker. They didn’t have the funds to compete with huge dating sites like Match.com, so they applied for a couple of patents and began brainstorming. They believed that their vertical-search technology was good — better, in fact, than almost anything they had seen online. Best of all, it was built to work well on almost any kind of data set. With just a bit of tinkering, it could search for cheap airline tickets, or great apartments, or high-paying jobs. It could handle questions with hard-to-compare variables, like what’s the cheapest flight between London and Las Vegas if I’m trying to choose between business class or leaving after 3 p.m.?

 

As far as they could tell, their search technology performed better on such problems than Google did, which Adam discovered when he tried to buy an iPod online. “I spent half an hour searching Google for the lowest price, and it drove me completely mad,” he told me. It was impossible for him to figure out which sites were selling iPods and which were selling accessories, like headphones or charging cords. Or Google would show Adam one price, but then the actual price was completely different. Or there was an extra charge for shipping. It seemed to Adam his technology would do a much better job.

 

Google executives, had they known of Adam’s frustrations, probably wouldn’t have been surprised. For years, Google had been trying to build a tool for comparing online prices. “The idea was you should be able to input any item, and we’d show you the best place to buy it,” says Brian Larson, a technical lead for what was then named Froogle and today is called Google Shopping. Larson’s team was small — just himself and one other programmer at first, and roughly a dozen people at its height — and Larson would regularly test how Froogle compared with other online price-comparison services. “Sometimes we were neck and neck; sometimes, not so much,” Larson said. “We had a hundred million product listings, which was better than competitors.” But they were often outperformed by sites like PriceGrabber.com, which had many more employees devoted to price comparisons.

 

Froogle’s limitations tended to pop up particularly when users included too many search parameters. For a while, Larson had a specific test search that Froogle kept failing, something like “white running shoes and cheap and free shipping.” Inevitably, the first result would be a Christmas elf wearing running shoes that some guy was selling online. No matter how Google’s engineers fiddled with their coding, they couldn’t stop the elf from appearing as the top link. Eventually, a manager bought the elf so it wouldn’t appear in the search results anymore. “We made elf T-shirts,” Larson told me. “It became our mascot.”

 

Adam and Shivaun’s technology was good enough to tell the difference between an elf wearing running shoes and an actual pair of running shoes. It was good enough, in fact, to figure out which websites charged hidden shipping fees and which offered truly good deals. So the Raffs quit their jobs, hired a few programmers, spent months perfecting their technology and, in early 2006, unveiled Foundem.com, a vertical-search engine for finding cheap online prices, to a small group of friends and associates. Each time someone used Foundem to buy something, the Raffs would receive a small payment from the website making the sale. Adam and Shivaun weren’t sure their company would succeed — there were already a couple of other big price-comparison search engines, like PriceGrabber, NexTag and, of course, Google itself — but they figured this was how the internet was supposed to work: Two people with a new idea can take on giants and, if their technology is good enough, grow into colossi themselves.

 

The Raffs knew they would have to rely on Google to find customers. For one thing, as evidenced by the name Foundem, they weren’t marketing geniuses. (“It’s like we found ’em for you, you know?” Shivaun explained.) But early tests indicated that Foundem usually came up high in Google’s search results whenever people submitted queries like “compare prices xr-1000 motorcycle helmets.” Six months later, they opened Foundem to the world, and initial traffic was encouraging. “Search engines liked the site,” Shivaun told me. “That’s supposed to be the recipe for success.” As long as their vertical-search technology was strong, the Raffs figured, Google would guide shoppers to their door.

 

Google has succeeded where Genghis Khan, communism and Esperanto all failed: It dominates the globe. Though estimates vary by region, the company now accounts for an estimated 87 percent of online searches worldwide. It processes trillions of queries each year, which works out to at least 5.5 billion a day, 63,000 a second. So odds are good that sometime in the last week, or last hour, or last 10 minutes, you’ve used Google to answer a nagging question or to look up a minor fact, and barely paused to consider how near-magical it is that almost any bit of knowledge can be delivered to you faster than you can type the request. If you’re old enough to remember the internet before 1998, when Google was founded, you’ll recall what it was like when searching online involved AltaVista or Lycos and consistently delivered a healthy dose of spam or porn. (Pity the early web enthusiasts who innocently asked Jeeves about “amateurs” or “steel.”)

 

In other words, it’s very likely you love Google, or are at least fond of Google, or hardly think about Google, the same way you hardly think about water systems or traffic lights or any of the other things you rely on every day. Therefore you might have been surprised when headlines began appearing last year suggesting that Google and its fellow tech giants were threatening everything from our economy to democracy itself. Lawmakers have accused Google of creating an automated advertising system so vast and subtle that hardly anyone noticed when Russian saboteurs co-opted it in the last election. Critics say Facebook exploits our addictive impulses and silos us in ideological echo chambers. Amazon’s reach is blamed for spurring a retail meltdown; Apple’s economic impact is so profound it can cause market-wide gyrations. These controversies point to the growing anxiety that a small number of technology companies are now such powerful entities that they can destroy entire industries or social norms with just a few lines of computer code. Those four companies, plus Microsoft, make up America’s largest sources of aggregated news, advertising, online shopping, digital entertainment and the tools of business and communication. They’re also among the world’s most valuable firms, with combined annual revenues of more than half a trillion dollars.

 

In a rare display of bipartisanship, lawmakers from both political parties have started questioning how these tech giants grew so powerful so fast. Regulators in Missouri, Utah, Washington, D.C., and elsewhere have called for greater scrutiny of Google and others, citing antitrust concerns; some critics have suggested that our courts and legislatures need to go after tech firms in the same way the trustbusters broke up oil and railroad monopolies a century ago. But others say that Google and its cohort are guilty only of delighting customers. If these tech leviathans ever fail to satisfy us, their defenders argue, capitalism will punish them the same way it once brought down Yahoo, AOL and Myspace.

 

At the core of this debate is a question that is more than a century old: When does a megacompany’s behavior become so brazen that it violates the law? In the early 1900s, just after the Industrial Revolution, the federal government provided an answer by suing one of America’s largest companies, Standard Oil, on the novel theory that big becomes bad when a giant uses its dominance not only to defeat its competitors but also to extinguish the possibility that competition might occur.

 

In its technological innovation, Standard Oil was the Google of its day. The company’s founder, John D. Rockefeller, had become the richest man in America by spending millions of dollars hiring scientists to transform how oil was refined and transported. And those innovations earned the public’s admiration. In 1858, before Standard Oil was founded, lighting a home required whale oil, which cost up to $3 a gallon, putting illumination out of reach for all but the wealthiest of households. By 1885, after Standard Oil figured out how to refine kerosene, it cost just 8 cents a gallon to brighten the night. “Let the good work go on,” Rockefeller wrote to a partner. “We must ever remember we are refining oil for the poor man and he must have it cheap and good.”

 

Standard Oil’s technological discoveries gave the company huge advantages over its rivals, and Rockefeller exploited those advantages ruthlessly. He cut secret deals with railroads so that other firms had to pay more for transportation. He forced smaller refineries to choose between selling out to him or facing bankruptcy. “Rockefeller and his associates did not build the Standard Oil Co. in the boardrooms of Wall Street,” wrote Ida Tarbell, a muckraking journalist of the day. “They fought their way to control by rebate and drawback, bribe and blackmail, espionage and price cutting, and perhaps more important, by ruthless, never slothful efficiency of organization.”

 

In 1906, President Theodore Roosevelt ordered his Justice Department to sue Standard Oil for antitrust violations. But government lawyers faced a quandary: It wasn’t illegal for Standard Oil to be a monopoly. It wasn’t even illegal to compete mercilessly. So government prosecutors found a new argument: If a firm is more powerful than everyone else, they said, it can’t simply act like everyone else. Instead, it has to live by a special set of rules, so that other companies get a fair shot. “The theory was that competition is good, and if a monopoly extinguishes competition, that’s bad,” says Herbert Hovenkamp, co-author of a seminal treatise on antitrust law. “Once you become a monopoly, you have to start acting differently, and if you don’t, then what you’ve been doing all along starts breaking the law.”

 

The Supreme Court agreed and split Standard Oil into 34 firms. (Rockefeller received stock in all of them and became even wealthier.) In the decades following the Standard Oil breakup, antitrust enforcement generally abided by a core principle: When a company grows so powerful that it becomes a gatekeeper, and uses that might to undermine competitors, then the government should intervene. And in the last century, as courts have censured other monopolies, academics and jurists have noticed a pattern: Monopolies and technology often seem intertwined. When a company discovers a technological advantage — like the innovations of Rockefeller’s scientists — it sometimes makes that firm so powerful that it becomes a monopoly almost without trying very hard. Many of the most important antitrust lawsuits in American history — against IBM, Alcoa, Kodak and others — were rooted in claims that one company had made technological discoveries that allowed it to outpace competitors.

 

For decades, there seemed to be a consensus among policymakers and business leaders (though not always among targeted companies) about how the antitrust laws should be enforced. But around the turn of this century, a number of tech companies emerged that caused some people to question whether the antitrust formula made sense anymore. Firms like Google and Facebook have become increasingly useful as they have grown bigger and bigger — a characteristic known as network effects. What’s more, some have argued that the online world is so fast-moving that no antitrust lawsuit can keep pace. Nowadays even the biggest titan can be defeated by a tiny start-up, as long as the newcomer has better ideas or faster tech. Antitrust laws, digital executives said, aren’t needed anymore.

 

Consider Microsoft. The government spent most of the 1990s suing Microsoft for antitrust violations, a prosecution that many now view as a complete waste of time and money. When Microsoft’s chief executive, Bill Gates, signed a consent decree to resolve one of its monopoly investigations in 1994, he told a reporter that it was essentially pointless for the company’s various divisions: “None of the people who run those divisions are going to change what they do or think.” Even after a federal judge ordered Microsoft broken into separate companies in 2000, the punishment didn’t take. Microsoft fought the ruling and won on appeal. The government then offered a settlement so feeble that nine states begged the court to reject the proposal. It was approved.

 

What eventually humbled Bill Gates and ended Microsoft’s monopoly wasn’t antitrust prosecutions, observers say, but a more nimble start-up named Google, a search engine designed by two Stanford Ph.D. dropouts that outperformed Microsoft’s own forays into search (first MSN Search and now Bing). Then those two dropouts introduced a series of applications, like Google Docs and Google Sheets, that eventually began to compete with almost every aspect of Microsoft’s businesses. And Google did all that not by relying on government prosecutors but by being smarter. You don’t need antitrust in the digital marketplace, critics argue. “When our products don’t work or we make mistakes, it’s easy for users to go elsewhere because our competition is only a click away,” Google’s co-founder, Larry Page, said in 2012. Translation: The government ought to stop worrying, because no online giant will ever survive any longer than it deserves to.

 

Once Foundem.com was available to everyone, the company’s honeymoon lasted precisely two days. During its first 48 hours, the Raffs saw a rush of traffic from users typing product queries into Google and other search engines. But then, suddenly, the traffic stopped. Alarmed, Adam and Shivaun began running diagnostics. They quickly discovered that their site, which until then had been appearing near the top of search results, was now languishing on Google, mired 12 or 15 or 64 or 170 pages down. On other search engines, like MSN Search and Yahoo, Foundem still ranked high. But on Google, Foundem had effectively disappeared. And Google, of course, was where a vast majority of people searched online.

 

The Raffs wondered if this could be some kind of technical error, so they began checking their coding and sending email to Google executives, begging them to fix whatever was causing Foundem to vanish. Figuring out whom to write, and how to contact them, was a challenge in itself. Although Google’s parent company bills itself as a diversified firm with about 80,000 employees, almost 90 percent of the company’s revenues derive from advertisements, like the ones that show up in search. As a result, there are few things more important to Google’s executives than protecting the firm’s search dominance, particularly among the most profitable kinds of queries, such as those of users looking to buy things online. In fact, at about the same time the Raffs were starting Foundem.com, Google executives were growing increasingly concerned about the threats that vertical-search engines posed to Google’s business.

 

“What is the real threat if we don’t execute on verticals?” one Google executive emailed his colleagues in 2005, according to internal documents later shared with the Federal Trade Commission. “Loss of traffic from Google.com because folks search elsewhere for some queries,” he wrote, in answer to his own question. “If one of our big competitors builds a constellation of high-quality verticals, we are hurt badly,” the internal documents continued. Another executive put it more bluntly: “Google’s core business is monetizing commercial queries. If users go to competitors such as Amazon to do product queries, long-term revenue will suffer.”

 

Google executives began holding battle-plan meetings for the vertical war. Shortly after Foundem.com went online, one executive issued an order: Henceforth, Google’s own price-comparison results should appear at the top of many search pages, as quickly as possible, even if that meant disregarding the natural results of the company’s search algorithm. “Long term, I think we need to commit to a more aggressive path,” a high-ranking Google employee wrote to colleagues. Eventually, a mandate came from the chief executive: “Larry thought product should get more exposure,” a senior official wrote.

 

One way to get that exposure was to influence the rules governing how Google displayed search results. In 2006, Google instituted a shift in its search algorithm, known as the Big Daddy update, which penalized websites with large numbers of subpages but few inbound links. A few years later, another shift, known as Panda, penalized sites that copied text from other websites. When adjustments like these occurred, Google explained to users, they were aimed at combating “individuals or systems seeking to ‘game’ our systems in order to appear higher in search results — using low-quality ‘content farms,’ hidden text and other deceptive practices.”

 

Left unsaid was that Google itself generates millions of new subpages without inbound links each day, a fresh page each time someone performs a search. And each of those subpages is filled with text copied from other sites. By programming its search engine to ignore other sites doing the same thing that Google was doing, critics say, the company had made it nearly impossible for competing vertical-search engines, like Foundem, to show up high in Google’s results.

 

Shivaun and Adam sent email after email to Google executives, but no one responded with anything useful. So the Raffs started making phone calls. Those didn’t help much, either. Adam and Shivaun had worked in technology for decades. They were well known and had connections to important people inside Google and at other big firms. But none of that seemed to matter.

 

As the months went by and Foundem’s bank accounts dwindled, the Raffs, desperate, began approaching other websites, offering to adapt their technology to power those sites’ internal search engines. Soon they were providing back-end technology for a popular motorcycle site and a large magazine publisher. Eventually, about 2.5 million people were seeing Foundem’s search results each month. Foundem was named one of Britain’s best travel comparison sites by The Times of London and celebrated on a popular British gadget show. But without traffic from Google, the Raffs were barely holding on.

 

Three years passed this way. Some nights, Shivaun would sit at her computer, exhausted, Googling phrase after phrase — How do you lift a Google website penalty? Who at Google reviews mistakes? Google and deindexed and phone number and help — hoping that some magic combination of words might yield a new solution. “It just felt so unfair,” Shivaun told me. “We had great technology. It was winning awards. But we couldn’t even get an explanation from Google about why we weren’t showing up.” Eventually, they sought out a public relations firm, in the hope that a newspaper article might get Google’s attention. The P.R. firm had an additional suggestion: Why not file an antitrust complaint? To Adam and Shivaun, that seemed like a waste of time. If Microsoft had been able to shrug off the antitrust attacks of the United States government, why would Google care about a complaint filed by some small firm?

 

But they didn’t see many other options. So Adam and Shivaun pulled out their laptops and began assembling a long document detailing everything they had experienced. Then they went to Brussels, to the headquarters of the European Commission, the agency charged with regulating competitive behavior, and filed a complaint accusing Google of violating antimonopoly laws.

 

As the years passed, Shivaun and Adam got into the habit of visiting message boards where people obsessively discussed Google’s many peculiarities. They began to notice an interesting pattern among companies complaining about the search giant: Often, the aggrieved parties had, in some way, posed some kind of threat to Google’s business. And they seemed to have suffered dire consequences.

 

There was, for instance, Skyhook Wireless, which had invented a new navigation system that competed with Google’s location software and had signed major deals with the cellphone manufacturers Samsung and Motorola. Skyhook’s accuracy “is better than ours,” one Google manager speculated in an internal email later revealed in a lawsuit filed by Skyhook against Google. Not long after that note was written, according to the lawsuit, a high-ranking Google official pressured Samsung and Motorola to end their relationships with Skyhook — and implied that if they didn’t, Google could make it impossible for them to ship their phones on time. (Google has denied doing anything inappropriate.) Soon, Samsung and Motorola canceled their Skyhook contracts. Skyhook sued Google, and though one suit was dismissed, Google ended up paying $90 million to settle a patent-infringement claim. But by then it was too late. Skyhook’s founders, bereft of other partnership options, had been forced to sell their company at a large discount.

 

Then there was Yelp, a website with millions of user-generated reviews of local brewpubs, auto-body shops and other businesses. Yelp grew quickly as local queries — like “best nearby steakhouse” — became a third of all online searches. For years, Yelp appeared near or at the top of millions of Google searches. Google, hoping to capitalize on that traffic, tried to buy Yelp in 2009, but Yelp’s founders rejected those advances. Then Google started pulling Yelp’s content into its own results, which meant many users didn’t have to visit Yelp’s website. Yelp complained — to Google and later to the F.T.C. — but Google said the only alternative was for Yelp to remove its content from Google altogether, according to documents filed with federal regulators. The same thing happened at other fast-growing review sites like TripAdvisor and Citysearch, which also complained to the F.T.C. “We still exist,” says Luther Lowe, a vice president at Yelp, “but Google did everything it could to ensure that we’d never present a threat to them. It’s bullying, but they’re the 800-pound gorilla.”

 

The more Adam and Shivaun looked, the more examples they found. Getty Images had created a popular search engine to help users comb through the firm’s 170 million photographs and other visual art. Then, in 2013, Google adjusted how it displayed images so that rather than directing people to Getty’s website, users could easily see and download Getty’s high-definition images from Google itself. “Our traffic immediately fell 85 percent,” says Yoko Miyashita, Getty’s general counsel. “We wrote to Google, and said, Hey, this isn’t cool. And their response was, ‘Well, if you don’t agree to these terms, we’ll just exclude you’ ” — by letting Getty remove itself from the search engine entirely, Miyashita said. “That’s not really a choice, because if you aren’t on Google, you basically don’t exist.”

 

TradeComet.com, which operated a vertical-search engine for finding business products, initially prospered by buying ads on Google, but as the site grew, Google “raised my prices by 10,000 percent, which strangled our business virtually overnight,” the company’s C.E.O. at the time, Dan Savage, said when he filed an antitrust lawsuit in 2009. KinderStart.com, a vertical-search engine for parents, sued Google after it received a “PageRank” of zero, making it essentially unfindable. (TradeComet.com’s suit was dismissed on a technicality; KinderStart.com’s was dismissed for insufficient evidence.)

 

Shivaun and Adam filled notepads with the names of companies that had complained about Google’s tactics — eJustice, a vertical-search engine for legal information; NexTag, the fellow price-comparison site; BDZV, a group of German newspapers. They printed out lawsuits and regulatory complaints until their living room was a maze of paper.

 

Eventually the Raffs reached out to the F.T.C., which, they knew, was the American equivalent of the European Commission’s antitrust office, and the U.S. regulators invited them to visit. The F.T.C.’s staff, it turned out, had been quietly collecting complaints about Google for years. In 2012, those officials wrote a confidential 160-page report that said Google had “adopted a strategy of demoting, or refusing to display, links to certain vertical websites in highly commercial categories.” That memo, about half of which was accidentally sent to reporters at The Wall Street Journal after they submitted a Freedom of Information Act request, said that “Google’s conduct has resulted — and will result — in real harm to consumers and to innovation.”

 

“Google has strengthened its monopolies over search and search advertising through anticompetitive means,” which “will have lasting negative effects on consumer welfare,” F.T.C. officials wrote. They cited instances in which Google seemed purposely to be privileging less useful information, substandard search results and suboptimal links. “Although it displays its flight search above any natural search results for flight-booking sites, Google does not provide the most flight options for travelers,” the regulators wrote. Whereas a decade earlier someone searching for steakhouses would have seen a long list of websites, now the most noticeable results pointed to Google’s own listings, including Google maps, Google local search or advertisers paying Google. Some F.T.C. staff recommended “that the Commission issue a complaint against Google” for copying material and certain advertising and contract practices, though not search-engine bias.

 

Google responded to the report’s claims by arguing that the changes it made to the search engine benefited users. “Our testing has consistently showed that users want quick answers to their queries,” Google said in a statement when contacted about this article. “If you are searching for weather, you probably want a forecast, not just links to weather sites.” And when it comes to online shopping, the statement read, “if someone is searching for products, they likely want information about price and where they can buy it. They probably don’t want to be taken to another site where they have to enter their search again. . . . We absolutely do not make changes to our search algorithm to disadvantage competitors.” Claims to the contrary, like those made by Foundem, are untrue, Google maintained. “We make hundreds of changes to search every year, all with the same goal: Delivering users the best, most relevant search results,” the company continued. “Each change, large and small, affects millions of sites, some who see their rankings improve, others who drop.” And, Google concluded, “our ultimate responsibility is to deliver the best results possible to our users, not specific placements for sites within our results.”

 

When the F.T.C.’s politically appointed leadership considered the staff’s recommendations, they declined to sue Google, surprising many inside the agency. “While not everything Google did was beneficial, on balance, we did not believe that the evidence supported an F.T.C. challenge,” the agency’s chairman at the time, Jon Leibowitz, said when he announced the decision in 2013.

 

The F.T.C.’s decision, according to agency insiders, was motivated in part by a debate that has also sparked battles within antitrust courts over the last 40 years: Should the law protect consumers or encourage competition? They’re not always synonymous. “It wasn’t consumers who were complaining about Standard,” says Hovenkamp, the antitrust scholar. “It was the other oil companies.” Similarly, few users are kvetching about Google; it’s primarily other tech firms. United States judges have increasingly held that the government must show consumer harm to win in court.

 

Adam and Shivaun didn’t have to wait for the official F.T.C. announcement to know that their case was going nowhere. Meeting with officials in Washington, they could tell: These people were not going to prosecute. They had come to the United States at their own expense. They had written memo after memo arguing that Google was treating them unfairly and as a result hurting users. They had done everything they were asked. Standard Oil controlled 64 percent of the market for refined petroleum when the Supreme Court broke it into dozens of pieces. Google and Facebook today control an estimated 60 to 70 percent of the U.S. digital advertising market. And the F.T.C. seemed happy to let them keep doing it. To the Raffs, it felt as if history was repeating itself, as if the pointless, ineffectual Microsoft case was happening all over again. It felt as if nobody cared.

 

If you are younger than 29 — which just happens to be the average age of a Google employee, according to a survey done by PayScale — then odds are good you don’t remember much about the Microsoft antitrust battles of the 1990s. So, a quick primer: For almost a decade, starting in 1993, federal and state prosecutors besieged Microsoft in courtrooms across the nation, arguing that the company had acted in ways that were predatory and dishonest to preserve its software monopoly. One Microsoft executive was quoted in court as threatening to “cut off” the “air supply” of a competitor. “Is Bill Gates the ’90s answer to Don Corleone?” Time magazine asked. “I expected to find a bloody computer monitor in my bed,” a witness told investigators.

 

Along the way, Microsoft was accused of widespread bullying, coercion and general obnoxiousness. And Microsoft basically said: Whatever. “There’s one guy in charge of licenses,” Bill Gates told reporters after he signed a consent decree with the Department of Justice in 1994. “He’ll read the agreement.” Everyone else, the implication was, would ignore it.

 

Even when a judge ruled in 2000 that Microsoft was violating antitrust law, conventional wisdom held that the victory was largely pyrrhic. Microsoft successfully appealed, and prosecutors eventually threw in the towel, agreeing to abandon their attacks and settle if Microsoft agreed to token reforms, such as making its products more compatible with competitors’ software and giving three independent observers unfettered access to the company’s records, employees and source code. Microsoft’s executives thought that three observers, versus 48,000 employees, sounded like pretty good odds.

 

This was the history the Raffs recalled when they heard the F.T.C. was abandoning its investigation. But then, they also remembered a discussion they had once had with a lawyer named Gary Reback, who told them that everything they’d heard about the Microsoft trials was wrong. Reback is something of a legend in Silicon Valley, both because of his accomplishments as an antitrust provocateur and because of his anxious — some might say paranoid — worldview. Reback has been known to call other lawyers late at night and leave long, obsessively detailed voice mail messages about legal arguments and economic theories. He was featured on a 1997 cover of Wired magazine with the headline “This Lawyer Is Bill Gates’s Worst Nightmare,” a boast that wasn’t far-off: Working on behalf of clients like Netscape and Sun Microsystems, Reback had browbeaten the Department of Justice into suing Microsoft for antitrust.

 

By the time Adam and Shivaun started visiting the F.T.C., Reback had exchanged his antipathy of Microsoft for a disdain of Google and had accompanied them on their visits with regulators. There’s a loose coalition of economists and legal theorists who call themselves the New Brandeis Movement (critics call them “antitrust hipsters”), who believe that today’s tech giants pose threats as significant as Standard Oil a century ago. “All of the money spent online is going to just a few companies now,” says Reback (who disdains the New Brandeis label). “They don’t need dynamite or Pinkertons to club their competitors anymore. They just need algorithms and data.”

 

Reback had told Adam and Shivaun that it was important for them to keep up their fight, no matter the setbacks, and as evidence he pointed to the Microsoft trial. Anyone who said that the 1990s prosecution of Microsoft didn’t accomplish anything — that it was companies like Google, rather than government lawyers, that humbled Microsoft — didn’t know what they were talking about, Reback said. In fact, he argued, the opposite was true: The antitrust attacks on Microsoft made all the difference. Condemning Microsoft as a monopoly is why Google exists today, he said.

 

Surprisingly, some people who worked at Microsoft in the 1990s and early 2000s agree with him. In the days when federal prosecutors were attacking Microsoft day and night, the company might have publicly brushed off the salvos, insiders say. But within the workplace, the attitude was totally different. As the government sued, Microsoft executives became so anxious and gun-shy that they essentially undermined their own monopoly out of terror they might be pilloried again. It wasn’t the consent decrees or court decisions that made the difference, according to multiple current and former Microsoft employees. It was “the constant scrutiny and being in the newspaper all the time,” said Gene Burrus, a former Microsoft lawyer. “People started second-guessing themselves. No one wanted to test the regulators anymore.”

 

In public, Bill Gates was declaring victory, but inside Microsoft, executives were demanding that lawyers and other compliance officials — the kinds of people who, previously, were routinely ignored — be invited to every meeting. Software engineers began casually dropping by attorneys’ desks and describing new software features, and then asking, in desperate whispers, if anything they’d mentioned might trigger a subpoena. One Microsoft senior executive moved an extra chair into his office so a compliance official could sit alongside him during product reviews. Every time a programmer detailed a new idea, the executive turned to the official, who would point his thumb up or down like a capricious Roman emperor.

 

In the early 2000s, Microsoft’s top executives told some divisions that their plans would be proactively shared with competitors — literally describing what the company intended to create before software was even built — to make sure it wouldn’t offend anyone who was likely to sue. Microsoft’s engineers were outraged. But they went along with it.

 

And most important, as Microsoft lived under government scrutiny, employees abandoned what had been nascent internal discussions about crushing a young, emerging competitor — Google. There had been informal conjectures about reprogramming Microsoft’s web browser, the popular Internet Explorer, so that anytime people typed in “Google,” they would be redirected to MSN Search, according to company insiders. Or, perhaps a warning message might pop up: “Did you know Google uses your data in ways you can’t control?”

 

Microsoft was so powerful, and Google so new, that the young search engine could have been killed off, some insiders at both companies believe. “But there was a new culture of compliance, and we didn’t want to get in trouble again, so nothing happened,” Burrus said. The myth that Google humbled Microsoft on its own is wrong. The government’s antitrust lawsuit is one reason that Google was eventually able to break Microsoft’s monopoly.

 

“If Microsoft hadn’t been sued, all of technology would be different today,” Reback told me. We’ve known since Standard Oil that advances in technology make it easier for monopolies to emerge. But what’s less recognized is the importance of antitrust in making sure those new technologies spread to everyone else. In 1969 the Justice Department started a lawsuit against IBM for antitrust violations that lasted 13 years. The government eventually surrendered, but in an earlier attempt to mollify prosecutors, IBM eliminated its practice of bundling hardware and software, a shift that essentially created the software industry. Suddenly, new start-ups could get a foothold simply by writing programs rather than building machines. Microsoft was founded a few years later and soon outpaced IBM.

 

Or consider AT&T, which was sued by the government in 1974, fought in court for eight years and then slyly agreed to divest itself of some businesses if it could keep its most valuable assets. Critics complained AT&T was getting the deal of a lifetime. But then start-ups like Sprint and MCI made millions building on technologies AT&T championed, and AT&T found itself struggling to compete. It’s completely wrong to say that antitrust doesn’t matter, Reback argues. “The internet only exists because we broke up AT&T. The software industry exists because Johnson sued IBM.”

 

It was critical that the Raffs continue fighting, Reback told them. Social embarrassment and sustained attacks have the power to succeed when courtrooms or political agencies fail. After their F.T.C. disappointment, the Raffs flew back to England to consider their options. And then one night they were at home watching television when the phone rang. Someone they had met in Brussels was calling to share some remarkable news. The European Commission had issued a decision on the complaint they filed six years before.

 

What changed everything was a middle-aged Danish politician named Margrethe Vestager, who had recently been named the European Union’s commissioner for competition. Vestager was an unusual choice for the post. She wasn’t a populist crusader or a pro-business acolyte; she was, instead, a moderate whose claim to fame, at that point, was having served as an inspiration for the television show “Borgen,” a fictional series about a Danish politician. But Vestager was awarded the commissioner’s post in 2014 after arguing that European marketplaces needed to do a better job of giving everyone an equal chance to succeed. Since assuming her office, Vestager has become, unexpectedly, the most prominent antitrust official in the world, invited to speak at conferences and mobbed by autograph seekers.

 

By the time Vestager took office, Google had already transitioned its price-comparison service to its present incarnation, which is effectively an advertising system that prominently features links only from companies that pay for the promotion. (Users are notified by a small logo that says “sponsored.”) After reviewing the complaints submitted by the Raffs and others, Vestager announced she intended to formally charge Google with antitrust violations. (She has also embarked on investigations into the European tax practices of Starbucks, Amazon and Apple, as well as anticompetitive tactics at Qualcomm, Facebook and Gazprom.)

 

Over the next two years, Vestager’s staff reviewed data from 1.7 billion Google queries. They scrutinized how people fared when they conducted searches on topics in which Google had a vested interest, versus those where the company had nothing to gain. Then, in June of last year, the commission issued its final verdict: “What Google has done is illegal under E.U. antitrust rules,” Vestager said in a statement released at the time. “It denied other companies the chance to compete on the merits and to innovate. And most important, it denied European consumers a genuine choice of services and the full benefits of innovation.” Google was ordered to stop giving its own comparison-shopping service an illegal advantage and was fined an eye-popping $2.7 billion, the largest such penalty in the European Commission’s history and more than twice as large as any such fine ever levied by the United States.

 

The verdict rocked Silicon Valley. Some think Europe’s assertiveness makes it more likely American regulators will act as well. And there’s evidence that’s already starting. Donald Trump appealed to voters, in part, by attacking the tech monopolies. In a case of truly odd bedfellows, that puts him in alignment with Elizabeth Warren and Bernie Sanders, who have long called for greater scrutiny of technology companies. Last year, a group of Democratic lawmakers in Congress, led by Senator Amy Klobuchar of Minnesota, sponsored legislation to boost antitrust enforcement by forcing companies to assume the burden of showing that a merger won’t hurt the public.

 

Meanwhile, a bipartisan assortment of state attorneys general have urged the F.T.C. to reopen its investigation of Google. Most major antitrust battles, including the federal suits against Microsoft and Standard Oil, have begun as state actions. A Missouri investigation is particularly notable because the state’s Republican attorney general, Josh Hawley, who is running for the United States Senate, has subpoenaed information to see if Google has manipulated searches to disadvantage potential competitors. “The Obama-era F.T.C. did not take any enforcement action against Google, did not press this forward and has essentially given them a free pass,” Hawley told reporters after revealing his inquiry in November. “I will not let Missouri consumers and businesses be exploited by industry giants.”

 

As attacks against Google have escalated, the company has tried to limit the damage. After Yelp complained to the F.T.C. about Google’s stealing its content, Google promised to make it easier for websites to opt out of automatic copying, a pledge it reaffirmed a few months ago. And earlier this month, in exchange for Getty Images’ withdrawing its complaint to the European Commission, Google signed a licensing agreement with Getty promising to more clearly display images’ copyright information. Other titans like Facebook are similarly trying to get ahead of criticisms, voluntarily pledging greater transparency and promising to work more cooperatively with regulators.

 

The implication is clear enough: Google and the other tech titans understand that the landscape is shifting. They realize that their halos have become tarnished, that the arguments they once invoked as a digital exception to American economic history — that the internet economy is uniquely self-correcting, because competition is only a click away — no longer hold as much weight. “When you get as big as Google, you become so powerful that the market bends around you,” Vestager told me. The notion that antitrust law isn’t needed anymore, that we must choose between helping consumers or spurring competition, no longer seems sufficient reason to exempt the tech giants from century-old legal codes. If anything, Vestager’s verdict and state investigations indicate that companies like Google may have more in common with the monopolists of old than most people thought. Silicon Valley’s bigwigs ought to be scared.

 

“If Europe can prosecute Google, then we can as well,” says William Kovacic, a law professor and former Republican-appointed chairman of the Federal Trade Commission. “It’s just a question of willingness now.”

 

If the internet’s potentates are frightened, however, they’re doing a good job of hiding it. Google has appealed the European Commission’s decision and has vigorously defended itself online. The company’s arguments are the same ones that it was putting forth on company blogs over the course of the investigation. “We disagree with the European Commission’s argument that our improved Google Shopping results are harming competition,” Google’s top lawyer wrote in one post. The commission “drew such a narrow definition around online shopping services that it even excluded services like Amazon,” undermining the contention that Google is dominant. “Google delivered more than 20 billion free clicks to aggregators over the last decade,” he wrote in another post. Forcing it to “direct more clicks to price-comparison aggregators would just subsidize sites that have become less useful for consumers.” Google’s data indicates that users appreciate how the search engine has shifted over the years. “That’s not ‘favoring’ ” Google’s interests, the company said. “That’s giving customers and advertisers what they find most useful.”

 

Some legal theorists think that Google might have a point. “To what extent are consumers, rather than competitors, being harmed by Google?” says Hovenkamp, the antitrust scholar. “If the answer is ‘not much,’ then I’m suspicious of an antitrust remedy.” Others say the risks are too high. “There are very real costs associated with suing a company like Google,” says Geoffrey Manne, executive director of the International Center for Law & Economics, a nonpartisan research center. “You’re potentially impairing a firm that provides vital services to millions of people, and potentially benefiting competitors who don’t deserve that support.”

 

Those are fair arguments. But they are also, in some ways, beside the point. Antitrust has never been just about costs and benefits or fairness. It’s never been about whether we love the monopolist. People loved Standard Oil a century ago, and Microsoft in the 1990s, just as they love Google today.

 

Rather, antitrust has always been about progress. Antitrust prosecutions are part of how technology grows. Antitrust laws ultimately aren’t about justice, as if success were something to be condemned; instead, they are a tool that society uses to help start-ups build on a monopolist’s breakthroughs without, in the process, being crushed by the monopolist. And then, if those start-ups prosper and make discoveries of their own, they eventually become monopolies themselves, and the cycle starts anew. If Microsoft had crushed Google two decades ago, no one would have noticed. Today we would happily be using Bing, unaware that a better alternative once existed. Instead, we’re lucky a quixotic antitrust lawsuit helped to stop that from happening. We’re lucky that antitrust lawyers unintentionally guaranteed that Google would thrive.

 

Put differently, if you love technology — if you always buy the latest gadgets and think scientific advances are powerful forces for good — then perhaps you ought to cheer on the antitrust prosecutors. Because there is no better method for keeping the marketplace constructive and creative than a legal system that intervenes whenever a company, no matter how beloved, grows so large as to blot out the sun. If you love Google, you should hope the government sues it for antitrust offenses — and you should hope it happens soon, because who knows what wondrous new creations are waiting patiently in the wings.

 

For the Raffs, however, it’s probably too late. By the time Vestager announced her verdict and record-setting fine last year, it had been 12 years since Adam and Shivaun started Foundem.com. During that time, their lives slowly but inexorably became devoted to battling Google. They had spent thousands of hours corresponding with regulatory agencies across the globe. They had filed a civil suit against Google in British court, a case that is ongoing. They basically shut down Foundem, creating more time for them to give advice to other companies and regulators fighting Google. This consulting work, some of which was funded by Google’s competitors, has helped to keep the Raffs afloat. And if the Raffs win their lawsuit against Google, it could be worth millions. “But it’s a different business model than we expected,” Adam told me. “It’s also deeply frustrating, because we became technologists in order to build new technologies. We never intended to be professional plaintiffs or antitrust crusaders.”

 

One of the most difficult things for the Raffs over the past decade has been figuring out how to explain this journey to themselves and others. Even friends and family didn’t fully understand what was going on. “It feels really good to be validated like this, to be told we were right,” Shivaun told me, referring to Vestager’s verdict. “But that doesn’t turn back the clock and give us another chance. Even if we win in Brussels, or win our lawsuit, in some ways, we were still defeated. We were still beaten by Google.”

 


The company I've worked for for almost 10 years does analytics on large amounts of relatively sensitive (mostly) time series data from many sources all over the globe (industrial...ish), for a number of different purposes.  I'm not a data scientist or data engineer, rather I've mostly been in IT operations, but it is fascinating what can be done.  One of the things we do really well is the moving/cleansing/processing/merging of different types and sources of data to create a really rich, high-fidelity data set.  Some of the really mundane things can be surprisingly difficult, like... dealing with time zones from different sources where some move and some are stationary, or dealing with even the same types of data but where frames change or are inconsistent or sensors can be faulty, or even just moving the data around.  And security.  And infrastructure, dealing with storage/DR/whatever on petabytes of data.
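Not this poster's actual pipeline, just a minimal pandas sketch of the time-zone headache: one source stamps readings naively in a DST-observing local zone, another carries a fixed offset, and everything has to be normalized to UTC before the series can be merged. All source names, zones, and values below are made up.

```python
import pandas as pd

# Hypothetical readings from two sources: one stamped naively in a
# DST-observing local zone, one already carrying a fixed UTC offset.
plant = pd.DataFrame({
    "ts": pd.to_datetime(["2018-03-24 12:00", "2018-03-26 12:00"]),  # a DST change falls in between
    "value": [10.1, 10.4],
})
ship = pd.DataFrame({
    "ts": pd.to_datetime(["2018-03-24 11:00+01:00", "2018-03-26 10:00+01:00"]),
    "value": [9.8, 10.2],
})

# Localize the naive timestamps to their source zone (DST-aware),
# then convert everything to UTC so the two series line up.
plant["ts"] = plant["ts"].dt.tz_localize("Europe/Berlin").dt.tz_convert("UTC")
ship["ts"] = ship["ts"].dt.tz_convert("UTC")

merged = pd.concat([plant.assign(src="plant"), ship.assign(src="ship")]).sort_values("ts")
print(merged)
```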

 

Either way, interested, subscribed, etc.


This is my next career move, I'm pretty sure.  I'm in kind of a sales hybrid role right now, but the whole sales division just restructured and basically all of the work I've done for the past 3 years is being ignored, so it's time to move on.  I found I was doing some basic data analytics on my own in my previous role anyway, so I'm going to try and make the move over to our Sales and Business Operations department.  

I'm hoping my experience at my company and familiarity with our back end systems will help overcome my lack of analytics experience in their eyes.  I can learn Tableau, SQL, Salesforce Admin, etc if given the chance.  Any thoughts?  Is this folly?  Should I be going back to school or is this move doable?

Edited by Biff Tannen

On 4/7/2018 at 6:22 PM, Biff Tannen said:

This is my next career move, I'm pretty sure.  I'm in kind of a sales hybrid role right now, but the whole sales division just restructured and basically all of the work I've done for the past 3 years is being ignored, so it's time to move on.  I found I was doing some basic data analytics on my own in my previous role anyway, so I'm going to try and make the move over to our Sales and Business Operations department.  

I'm hoping my experience at my company and familiarity with our back end systems will help overcome my lack of analytics experience in their eyes.  I can learn Tableau, SQL, Salesforce Admin, etc if given the chance.  Any thoughts?  Is this folly?  Should I be going back to school or is this move doable?

 

On 4/7/2018 at 11:08 AM, Celery Man said:

The company I've worked for for almost 10 years does analytics on large amounts of relatively sensitive (mostly) time series data from many sources all over the globe (industrial...ish), for a number of different purposes.  I'm not a data scientist or data engineer, rather I've mostly been in IT operations, but it is fascinating what can be done.  One of the things we do really well is the moving/cleansing/processing/merging of different types and sources of data to create a really rich, high-fidelity data set.  Some of the really mundane things can be surprisingly difficult, like... dealing with time zones from different sources where some move and some are stationary, or dealing with even the same types of data but where frames change or are inconsistent or sensors can be faulty, or even just moving the data around.  And security.  And infrastructure, dealing with storage/DR/whatever on petabytes of data.

 

Either way, interested, subscribed, etc.

I've worked in Data Analytics for years and can provide some insight here.  

One important caveat to all the cool shit that can be done with data is that there needs to be a practical application of the analysis for it to be worth anything, and only around 10-15% of the analyses/models/algorithms generated ever go into "production".  

I'm using quotes around production because that can mean being implemented in software, leveraged for business use, etc.  99% of the work Data Scientists/Analysts do is data munging.  It's tedious, oftentimes boring, and will make you want to pull your fucking hair out.  So if you're cool with that, I'd say go for it.  For the high-level DS jobs you'll need at least a master's in a quantitative field to get in the door.  It also makes sense to focus on general-purpose programming languages such as Scala and Python vs. focusing on products like Tableau and Salesforce.  Both are great for their intended purposes, but what happens when someone fucks up the data model under your SFDC instance and you want to run some basic regressions against it?  Shit data = shit analysis. 
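To make the "mostly munging, then a basic regression" point concrete, here is a minimal hypothetical sketch in Python (pandas + scikit-learn). The column names and cleaning rules are invented for illustration and aren't anyone's real SFDC data model.

```python
import pandas as pd
from sklearn.linear_model import LinearRegression

# Hypothetical CRM export with the usual mess: strings where numbers
# should be, missing values, junk rows.
df = pd.DataFrame({
    "amount": ["1000", "2500", None, "4000", "oops"],
    "discount_pct": [0.05, 0.10, 0.00, 0.20, 0.15],
    "days_to_close": [30, 45, 60, 20, 50],
})

# Munging: coerce types and drop rows that can't be salvaged.
df["amount"] = pd.to_numeric(df["amount"], errors="coerce")
clean = df.dropna()

# Basic regression: do deal size and discount relate to time-to-close?
model = LinearRegression().fit(clean[["amount", "discount_pct"]], clean["days_to_close"])
print(model.coef_, model.intercept_)
```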

You'll ultimately be a "glue guy" of sorts.  At least that's the type of mentality I try to hire.  You have to be knowledgeable about the business to talk about why the analysis was run, why the methodology used was applicable in the circumstance, and how it can be deployed.  You really do cover the entire ecosystem of the business and technology in the process.  This is just my opinion, but in order for a data scientist to be successful, they need to be a jack of all trades and a master of none.  

“In God we trust; all others bring data.” - W. Edwards Deming


On 4/6/2018 at 10:29 PM, blacklab said:

I started on a project at work using word vectors to predict what companies are looking for what products.

Word vectors are a pretty cool thing that I didn't even know existed a few weeks ago.

Check out categorical n-gram algorithms.  They will break the text behind your word vectors down into categorical variables that can be fed into multivariate regressions.  That's probably the simplest way to attack that problem, but it requires the most maintenance, as the models would have to be recalibrated as performance degrades.  

You could also look into decision trees (a rough sketch of both is below).
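A minimal sketch of that suggestion with scikit-learn: turn the text into unigram/bigram count features (effectively categorical indicators) and fit either a regression-style model or a decision tree on them. The tiny training set below is made up purely for illustration.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

# Made-up snippets of company text and a label for whether the company
# ended up looking for the product in question.
texts = [
    "looking for cloud data warehouse pricing",
    "need on prem backup appliance quote",
    "evaluating cloud analytics platform",
    "renewing existing backup contract",
]
wants_product = [1, 0, 1, 0]

# Unigrams + bigrams as binary, categorical-style features.
vec = CountVectorizer(ngram_range=(1, 2), binary=True)
X = vec.fit_transform(texts)

# A regression-style model on the n-gram indicators...
lr = LogisticRegression().fit(X, wants_product)
# ...or a decision tree; either way the model needs periodic retraining
# (recalibration) as its performance degrades over time.
tree = DecisionTreeClassifier(max_depth=3).fit(X, wants_product)

print(lr.predict(vec.transform(["cloud pricing for analytics"])))
print(tree.predict(vec.transform(["backup appliance renewal"])))
```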


subscribed.  this is part of my world and I never stop thinking about it.  hopefully will be getting back into it. 

the simplest way I can put it, for most English-speaking, non-technical executives, is that if you can apply logic to a data source, you can infer an answer to just about any question that comes up.

one of my friends is a director at a fairly successful analytics based think tank and we continually discuss current state, trends, direction, etc. of this industry. 


1 hour ago, GlenFromTheMailRoom said:

 

 

You'll ultimately be a "glue guy" of sorts.  At least that's the type of mentality I try to hire.  You have to be knowledgeable about the business to talk about why the analysis was run, why the methodology used was applicable in the circumstance, and how it can be deployed.  You really do cover the entire ecosystem of the business and technology in the process.  This is just my opinion, but in order for a data scientist to be successful, they need to be a jack of all trades and a master of none.  

“In God we trust; all others bring data.” - W. Edwards Deming

While I agree with 99% of this...you are better suited to be a jack of all trades and a master of one (industry).  The best analytic minds I've met are very much business analysts first: they figure out current operating models, workflows, etc., and then use their ability to manipulate data to identify places to grow, become more efficient, and/or predict opportunities.   


It is OK to scrape in the US

For those of you who don't know, Amazon (and multiple other online retailers) had made it against their TOS to scrape customer comments, in an attempt to control the availability of that type of unstructured data and so they alone could provide services to those who use their site as a third-party retail solution. It also meant that academia couldn't use those comments to find the same kinds of insights and publish the findings in the open public domain.

 

This is a major boon to consumer behavior research and I am very thankful that this ruling came to pass. 


  • 2 weeks later...

I was just shown a Data Analytics platform we have been building that pulls information from 12 different applications. It was mind-blowing considering the amount of information being processed from the different platforms and presented in a meaningful way. The presenter said it's still early stages and right now it's just about building it, but the machine learning and analytics they can do with the data in the future is scary good, if true. 

Digitalization will be cool in the future. 


On 5/3/2018 at 3:25 PM, Neonmoon said:

I was just shown a Data Analytics platform we have been building that pulls information from 12 different applications. It was mind-blowing considering the amount of information being processed from the different platforms and presented in a meaningful way. The presenter said it's still early stages and right now it's just about building it, but the machine learning and analytics they can do with the data in the future is scary good, if true. 

Digitalization will be cool in the future. 

A CDC tool like Attunity or StreamSets plus a Kafka topology can make most of this happen.  From there you just land it in your data lake and leverage something like Spark to create DataFrames in memory that can be the real horsepower behind the curtain.  

Ultimately the crux of AI and ML is having the right problem to solve and deploying the algorithm.  Only about 12% of analytical problems truly have AI/ML as the answer; most of the others can be handled with multivariate regression or some other type of predictive/prescriptive analysis.  Plus you have to have someone who knows how to productionize it.  Most of these AI/ML platforms are 90% vaporware.  Be cautiously optimistic about what sales guys show you.
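As a rough illustration of that Kafka-to-data-lake-to-Spark flow (not the poster's actual stack; the broker, topic, schema, and paths are invented), a PySpark Structured Streaming job that lands CDC events as Parquet might look like this:

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import from_json, col
from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

spark = SparkSession.builder.appName("cdc_landing").getOrCreate()

# Hypothetical schema for change-data-capture events on a Kafka topic.
schema = StructType([
    StructField("table", StringType()),
    StructField("op", StringType()),
    StructField("amount", DoubleType()),
    StructField("updated_at", TimestampType()),
])

# Read the CDC topic as a stream and parse the JSON payload into columns.
raw = (spark.readStream.format("kafka")
       .option("kafka.bootstrap.servers", "broker:9092")  # made-up broker
       .option("subscribe", "cdc.orders")                  # made-up topic
       .load())
events = raw.select(from_json(col("value").cast("string"), schema).alias("e")).select("e.*")

# Land it in the data lake as Parquet; downstream Spark jobs can then build
# in-memory DataFrames on top of it for the heavier analysis.
query = (events.writeStream.format("parquet")
         .option("path", "s3a://datalake/cdc/orders")             # made-up path
         .option("checkpointLocation", "s3a://datalake/_chk/orders")
         .start())
query.awaitTermination()
```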

Edited by GlenFromTheMailRoom

  • 7 months later...

Lots of new things out there in the Analytics space, with Machine Learning and AI being the hot topics of the Fall.  Mining of unstructured data is still at the forefront, but applications are lagging behind techniques.

Lots of interest in Analytics on the Academic side, both within the Business Schools of the US and within Computer Science and Industrial Engineering Departments as well.

The number of academic postings for Analytics faculty positions is at an all-time high.

I am now on the Editorial Review Board for the brand-new Journal of Business Analytics.

The INFORMS Data Analytics Society renamed its Interfaces journal the Journal of Applied Analytics to keep up with the changing publication marketplace.

Been a long ass semester.


  • 7 months later...

Would like to resurrect this thread because I'm trying to self-learn the CRISP-DM methodology and Tableau after years of puttering around them. Also, I was following MapR as a company (a friend was on the investment team early on) and just saw they essentially folded and sold their IP to HPE. With Hortonworks and Cloudera merged and flailing, is the Big Data space dead for the "citizen data miner," or whatever Gartner calls the line-of-business user who can do basic analytics thanks to business-friendly products?


  • 3 weeks later...

Over the last year I've gotten into using more of Excel's functionality and I've been relatively impressed with its features. Reporting isn't as strong as I would like, but there are some decent options. It's also amazing what others can get Excel to do in terms of data analytics and reporting. Just check out some YouTube videos.  

Excel sucks for crunching really large data sets, but that isn't always what you need.  I would like to get more into Tableau but I can't find the time.


  • 1 year later...

New semester and some interesting observations on the analytics front at a PhD granting business school.

What I was teaching to my undergrad Analytics majors 5 years ago is now being taught, in a somewhat dumbed-down form, to business students as their second-semester stat course. The old format of stat up to the central limit theorem in the first semester and then through high-end regression and modeling has been replaced: everything up to regression is combined into one class, and the second covers regression, time series, descriptive data mining, simple linear programming, simulation, and decision theory instead. Makes a stronger overall student profile for employment.

Grad students all want to take analytics courses, but most can't really hack the math/logic. Lots of growth in applied modeling logic, not a ton of students really interested in the DB side of big data. Maybe we have shitty DB faculty, but the students are gravitating to learning some form of query language and then moving on into analytics roles.

Doc students are now all actively observing any class that teaches analytics to help prepare them for the new sets of standard interview questions that are now coming up regularly in faculty interviews. This is true for both the applied quant and Information Systems doc students.

Everyone wants to learn text mining, but most can only grasp sentiment-driven approaches. Text topic extraction concepts seem to still blow most students away. Lots of interest in internal mining of corporate reporting for new populism concepts (things that are ground-up concerns, not top-down ideas), and they are willing to pay top dollar.

Not a ton of issues with placement for any student level as long as they can actually "talk" what analytics is and what it does. I see a major difference in placement rates vs. comp sci or data science/engineering, and the main answer from recruiters is that they want people who understand it and can explain it to those outside of their department.

 

Lets see how this semester goes and see what other new weird things pop up.


16 hours ago, Laxtonto said:

New semester and some interesting observations on the analytics front at a PhD granting business school.

What I was teaching to my undergrad Analytics majors 5 years ago is now being taught, in a somewhat dumbed-down form, to business students as their second-semester stat course. The old format of stat up to the central limit theorem in the first semester and then through high-end regression and modeling has been replaced: everything up to regression is combined into one class, and the second covers regression, time series, descriptive data mining, simple linear programming, simulation, and decision theory instead. Makes a stronger overall student profile for employment.

Grad students all want to take analytics courses, but most can't really hack the math/logic. Lots of growth in applied modeling logic, not a ton of students really interested in the DB side of big data. Maybe we have shitty DB faculty, but the students are gravitating to learning some form of query language and then moving on into analytics roles.

Doc students are now all actively observing any class that teaches analytics to help prepare them for the new sets of standard interview questions that are now coming up regularly in faculty interviews. This is true for both the applied quant and Information Systems doc students.

Everyone wants to learn text mining, but most can only grasp sentiment-driven approaches. Text topic extraction concepts seem to still blow most students away. Lots of interest in internal mining of corporate reporting for new populism concepts (things that are ground-up concerns, not top-down ideas), and they are willing to pay top dollar.

Not a ton of issues with placement for any student level as long as they can actually "talk" what analytics is and what it does. I see a major difference in placement rates vs. comp sci or data science/engineering, and the main answer from recruiters is that they want people who understand it and can explain it to those outside of their department.

 

Lets see how this semester goes and see what other new weird things pop up.

If they want to be "ahead" of the curve, they should be checking out Network Reinforcement Learning and Graph Theory.  


Dumb question amnesty

What would you recommend for someone wanting to use data analytics in business? Any specific course or certificate, maybe a Google analytics certificate? I don't want or need to build platforms or develop theorems that will test the limits of AI/ML. I'm a dumb. Just want to learn more. 


On 9/6/2021 at 1:08 PM, Neonmoon said:

Dumb question amnesty

What would you recommend for someone wanting to use data analytics in business? Any specific course or certificate, maybe a Google analytics certificate? I don't want or need to build platforms or develop theorems that will test the limits of AI/ML. I'm a dumb. Just want to learn more. 

Are you trying to just learn the logic to answer the everyday questions that pop up, or are you looking to do some type of application of analytics? I can probably point you in the right direction, but I need a bit more on what you are trying to do.

I come from the mindset that technically analytics does not exist (which is funny in a way, given that I am heavily invested in the field both practically and at the academic level); it is just an upsell of business intelligence or decision support systems viewpoints, or some of the other versions of this concept along the way. We can now do it faster, with more data, and with more complexity than ever before.

Most of the concepts that we call "business analytics" can be traced back to 1940s Management Science. In essence, most of it is true applied statistics with a mixture of computer science viewpoints, database/data management concepts, and visualization. The true value in analytics is the speed with which complex answers to questions can be found and, therefore, the speed with which those answers can be employed.

This is the standard intro slide deck I teach in my overarching grad data mining course (essentially "welcome to analytics" before we start on specific major topic areas like visualization, predictive, or prescriptive), based within a SAS GUI-driven platform. 

Let me know what you are looking for and we can go from there.

 

 

Introduction to Data Mining and SAS Enterprise Miner.pptx


  • 4 weeks later...
On 9/13/2021 at 9:07 AM, Laxtonto said:

Are you trying to just learn the logic to answer the everyday questions that pop up, or are you looking to do some type of application of analytics? I can probably point you in the right direction, but I need a bit more on what you are trying to do.

I come from the mindset that technically analytics does not exist (which is funny in a way, given that I am heavily invested in the field both practically and at the academic level); it is just an upsell of business intelligence or decision support systems viewpoints, or some of the other versions of this concept along the way. We can now do it faster, with more data, and with more complexity than ever before.

Most of the concepts that we call "business analytics" can be traced back to 1940s Management Science. In essence, most of it is true applied statistics with a mixture of computer science viewpoints, database/data management concepts, and visualization. The true value in analytics is the speed with which complex answers to questions can be found and, therefore, the speed with which those answers can be employed.

This is the standard intro slide deck I teach in my overarching grad data mining course (essentially "welcome to analytics" before we start on specific major topic areas like visualization, predictive, or prescriptive), based within a SAS GUI-driven platform. 

Let me know what you are looking for and we can go from there.

 

 

Introduction to Data Mining and SAS Enterprise Miner.pptx 5.04 MB · 8 downloads

Do you mind if I ask where you teach? I am halfway through the Data Science master's program at Northwestern. There are so many garbage programs available that it's hard to tell what is good and what isn't. The program has a heavy math requirement and the core classes cover just about every topic. So far it feels like I am paying for a worthwhile degree. 


  • 3 months later...

I've broadly been in "industrial analytics" my whole career. Up until early in the pandemic, that was in aviation, supporting and building tools to clean, warehouse, and analyze operational aircraft data ("blackbox" data). That was super interesting, from a lot of different perspectives. The amount of... work involved on just the ETL side for aircraft data, if you want it sparkly clean, loaded into a system, and ready to be analyzed alongside different aircraft/fleets/etc, frequently by citizen data scientists - just a crazy amount of evolution to get to a state where that is possible. The number of sensors on an aircraft, if something gets busted and it's not critical it's going to stay busted for a while, sensors that inherently have some wacky readings when they are connected to a flying metal tube, the difference in the dataframes between even the same fleet or recorders, marrying the data from the aircraft to weather data and traffic data from various reporting agencies, on and on - a whole lot of stuff to deal with there. You can imagine even for something basic like "fuel", it's a wire measuring a liquid in a flying tube. Maybe there are a few wires in the tank and it can do some smart stuff to give you a decent reading, but probably not. So maybe if you're just trying to do some quick analysis where one of the parameters is the amount of fuel in the tank during the flight, the basic "fuel" parameter that you pull up is using a straight reading from the sensor if the plane is in cruise (which is itself determined from data) and hasn't pitched or banked in some period of time, but if it's in ascent/descent/takeoff/landing it's doing some complicated math based on the reading, previous readings, vertical acceleration, etc. to give you a corrected number for the parameter. Just tons of shit like that, even before you get into the whole math engine and the complexity when it comes to defining events.
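The real correction logic in a flight-data system is far more involved than the description above and isn't spelled out here, but a toy version of the idea (trust the raw probe only in steady cruise, otherwise fall back to a smoothed estimate nudged by vertical acceleration) could be sketched like this. Every column name, window, and threshold is invented for illustration.

```python
import pandas as pd

def corrected_fuel(df: pd.DataFrame) -> pd.Series:
    """Toy derived 'fuel' parameter from made-up per-second flight data.

    Expects invented columns: fuel_raw (kg), phase, pitch_deg, bank_deg,
    vert_accel_g. Real systems do something far more sophisticated; this
    only illustrates the shape of the logic.
    """
    # "Steady cruise" = cruise phase with little recent pitch/bank activity.
    steady = (
        (df["phase"] == "CRUISE")
        & (df["pitch_deg"].rolling(30, min_periods=1).std() < 0.5)
        & (df["bank_deg"].rolling(30, min_periods=1).std() < 0.5)
    )
    # Elsewhere, use a smoothed recent history nudged by vertical acceleration
    # (a stand-in for fuel sloshing against the probe).
    smoothed = df["fuel_raw"].rolling(60, min_periods=1).median()
    estimate = smoothed * (1 - 0.01 * (df["vert_accel_g"] - 1.0))
    return df["fuel_raw"].where(steady, estimate)
```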

One customer presentation I loved was an operator in Europe that was doing a study on how to minimize blown tires on landing (which of course becomes a big safety concern as well as an operational issue, and costs a ton of money whenever a plane is on the ground longer than it needs to be). They identified flights where the tire blew on a landing and dug into the data. They theorized, based on the data, that a culprit was a certain part that becomes worn and creates friction. It's difficult to check this part all the time, and it's not straightforward enough to just swap it out based on number of takeoffs/landings/what have you. But you could tell that this is what was frequently going on with blown tires on landing because, in the takeoff phase, one wheel in the landing gear would decelerate quicker than the other once the plane lifted off the ground. It would gradually start doing that when the part needed to be replaced, and if it wasn't caught quickly enough, you would eventually get a blown tire. So, they created an event in the system that would check every flight (data wirelessly dumped into the system after every flight) for instances where, after the plane was off the ground, one wheel slows down quicker than the other. Whenever this happened, it would fire off automated alerts to safety and maintenance, and the next time the plane landed they'd have a maintenance crew ready with the part. And then over time, blown tires and the associated risks of runway overruns/safety events/etc decreased by some meaningful percentage. It was a really cool use of operational data.
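A stripped-down sketch of what such an event check could look like against one flight's post-liftoff wheel-speed samples. The column names, window, and threshold are invented for illustration and are not the operator's actual rule.

```python
import pandas as pd

def wheel_spindown_alert(flight: pd.DataFrame, threshold_rpm_per_s: float = 5.0) -> bool:
    """Flag asymmetric wheel spin-down just after liftoff (toy version).

    `flight` holds made-up columns sampled once per second starting at
    weight-off-wheels: left_wheel_rpm and right_wheel_rpm. Returns True when
    one wheel decelerates much faster than the other.
    """
    window = flight.head(20)  # roughly the first 20 seconds after liftoff
    left_decel = -window["left_wheel_rpm"].diff().mean()
    right_decel = -window["right_wheel_rpm"].diff().mean()
    return abs(left_decel - right_decel) > threshold_rpm_per_s

# Usage idea: run this against every flight as the data lands; when it
# returns True, fire the automated alert to safety and maintenance.
```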

I'm no longer in aviation - have been with a new company in biotech building industrial analytics and automation tools connected to enzymatic processes mostly in agriculture. It's maybe not as inherently cool as the safety stuff with aviation, but it is a really interesting change building stuff like this in a much less data rich environment. Less data rich, but more feasible to explore new instrumentation and get more hands on with where and how you extract data from the process. It's also cool getting to work with much smaller, much newer, much lighter products. Cloud bill at the old place was near $1mm annually, like driving a container ship (bad analogy - we couldn't start rearchitecting with containerization until we finally went through and got rid of all of the use of DCOM in the system) vs a speedboat.


The fun part is when the system acts completely differently day to day and all of the data you can gather says there isn't any appreciable difference. Obviously there is, you just can't measure it. So now you've got to go figure out what might be going on and then figure out how to measure it without spending so much money that the bean counters get upset.


On 10/6/2021 at 6:25 PM, Noozak said:

Do you mind if I ask where you teach? I am halfway through the Data Science master's program at Northwestern. There are so many garbage programs available that it's hard to tell what is good and what isn't. The program has a heavy math requirement and the core classes cover just about every topic. So far it feels like I am paying for a worthwhile degree. 

Holy shit, I wish I'd seen this thread three years ago when I'd just finished the remote master's program there…it was called Predictive Analytics then but has probably been renamed by now, as that term seems to have moved on to the next buzzword like data science or machine learning.

What stage are you in?  I graduated in 2017, and the program was very R-heavy, which I regret, as Python has taken over as a much more user-friendly, general-purpose programming language, while the sometimes-nasty R behaves much more like a calculator scripting language and is hard to integrate or scale.

The hardest and most rewarding course I took was with Dr. Bhatti, maybe it was 422.  We branched into much more applicable topics such as boosted trees and legit machine learning approaches like neural nets.  He was my favorite professor by far…tough but fair, engaged, and he expected you to learn the material ahead of time like a true graduate student.

The program overall came at a bad time, as it was already behind the progression towards cloud-based, production-focused, scalable techniques that let you crunch much more data than that baby CSV file of Iowa housing prices we explored early in the program.

Sadly I never really got to apply much of what I learned in data science in my role, as I'm much more of a program manager and executive communicator than a legit data scientist. I've forgotten most of the theory and details, but my main takeaways working on the fringes of data science are:

1) Most people don't spend nearly enough time applying the scientific method to really define the problem, break it down into testable hypotheses, and apply the appropriate technique given the data available.  They want answers…now…not to really define the problem.

2) There rarely is a magic answer that'll solve all your ills in business.  You have to be multi-disciplinary: fully understanding how the data you are using is generated, what it reflects, and how data science workflows are applied to it, then framing results in those contexts and applying decision science with the limitations in mind to move forward.

3) My job's main focus is now data engineering, as it's the foundation of any applicable use of the data…reporting, baseline analytics, predictions, production applications, and effective usage of it.  We grew our data scale too quickly, and a mix of legacy systems, a lack of comprehensive data quality programs and automation, and a wild mix of volume, velocity, and veracity makes for a lot of ETL cleanup.

4) Therefore my team is implementing 'Data Ops' this fiscal year as a culture, tooling, and workflow-design ethos to treat data engineering like a true factory process…lean dev methodologies, Six Sigma continuous improvement, and tightly integrated production stacks.  Azure is our preferred platform as we are an MS shop, so DevOps, Azure AD, Data Factory / Azure SQL / Databricks, and the Power suite are all tightly integrated.  Google 'DataOps' for more info.

5) Leadership needs patience.  For every successful implementation of a model there are a dozen that don't prove fruitful.  Nothing works like the movies.  Your data scientists need clean, raw data plus knowledge of the business for feature engineering, whereas analysts/reporting need simple, structured, summarized data to be effective at a lower skill level.

Sorry for the novel.  I hope the NU program has advanced since I went through it, because the world has changed rapidly; using R on a 100k-row CSV file won't cut it anymore.


7 hours ago, Homercles said:

Holy shit, I wish I'd seen this thread three years ago when I'd just finished the remote master's program there…it was called Predictive Analytics then but has probably been renamed by now, as that term seems to have moved on to the next buzzword like data science or machine learning.

What stage are you in?  I graduated in 2017, and the program was very R-heavy, which I regret, as Python has taken over as a much more user-friendly, general-purpose programming language, while the sometimes-nasty R behaves much more like a calculator scripting language and is hard to integrate or scale.

The hardest and most rewarding course I took was with Dr. Bhatti, maybe it was 422.  We branched into much more applicable topics such as boosted trees and legit machine learning approaches like neural nets.  He was my favorite professor by far…tough but fair, engaged, and he expected you to learn the material ahead of time like a true graduate student.

The program overall came at a bad time, as it was already behind the progression towards cloud-based, production-focused, scalable techniques that let you crunch much more data than that baby CSV file of Iowa housing prices we explored early in the program.

Sadly I never really got to apply much of what I learned in data science in my role, as I'm much more of a program manager and executive communicator than a legit data scientist. I've forgotten most of the theory and details, but my main takeaways working on the fringes of data science are:

1) Most people don't spend nearly enough time applying the scientific method to really define the problem, break it down into testable hypotheses, and apply the appropriate technique given the data available.  They want answers…now…not to really define the problem.

2) There rarely is a magic answer that'll solve all your ills in business.  You have to be multi-disciplinary: fully understanding how the data you are using is generated, what it reflects, and how data science workflows are applied to it, then framing results in those contexts and applying decision science with the limitations in mind to move forward.

3) My job's main focus is now data engineering, as it's the foundation of any applicable use of the data…reporting, baseline analytics, predictions, production applications, and effective usage of it.  We grew our data scale too quickly, and a mix of legacy systems, a lack of comprehensive data quality programs and automation, and a wild mix of volume, velocity, and veracity makes for a lot of ETL cleanup.

4) Therefore my team is implementing 'Data Ops' this fiscal year as a culture, tooling, and workflow-design ethos to treat data engineering like a true factory process…lean dev methodologies, Six Sigma continuous improvement, and tightly integrated production stacks.  Azure is our preferred platform as we are an MS shop, so DevOps, Azure AD, Data Factory / Azure SQL / Databricks, and the Power suite are all tightly integrated.  Google 'DataOps' for more info.

5) Leadership needs patience.  For every successful implementation of a model there are a dozen that don't prove fruitful.  Nothing works like the movies.  Your data scientists need clean, raw data plus knowledge of the business for feature engineering, whereas analysts/reporting need simple, structured, summarized data to be effective at a lower skill level.

Sorry for the novel.  I hope the NU program has advanced since I went through it, because the world has changed rapidly; using R on a 100k-row CSV file won't cut it anymore.

Yeah, they renamed it the Master of Science in Data Science now - most of the class designations still carry the old predictive analytics nomenclature in the class codes. I have 3 courses left until I finish, so I will be done later this year. It's about half and half these days between classes that use Python and those that require R. The data engineering specialization is almost entirely done in Python. I was managing an engineering team but took a job at a start-up that required I manage data engineers and data scientists. I needed to know more about what they do and how to communicate those results to leadership. I haven't taken a class with Bhatti yet but he is definitely still around. I won't ever really be a data scientist, but so far I feel like I am learning enough to have intelligent conversations and ask the right questions of my employees and the models they are building. 

I love your first two points. That is the one great thing the program reinforces. Data Science should follow a scientific process and there is no perfect model that solves everything you are attempting to do. 

The start-up I work for has decades of cleaned, high-quality real estate data. It was all originally built for the founder so he could find off-market deals. He realized he could slap a better UI on it, throw some marketing tools in, and have a platform for other real estate investors. Now he wants to build all sorts of analytics capabilities on top of it to make it more sophisticated and help buyers find deals faster and ultimately make more money. We just started exploring what we can do, so who knows where it will end up. 

