Upcoming Events

Event | Date | Location
Video Insider Summit | 09/14/2014 - 09/17/2014 | Montauk, NY
Ad Age Digital Conference San Francisco | 09/16/2014 | San Francisco, CA
Ad Age CMO Strategy Summit | 09/17/2014 | San Francisco, CA
CSO Perspectives on Defending Against the Pervasive Attacker | 09/17/2014 | Boston, MA
IT Roadmap Conference & Expo | 09/17/2014 | San Jose, CA
CIO Perspectives Chicago | 09/18/2014 | Chicago, IL
CSO Perspectives on Data Protection and Privacy | 09/23/2014 | San Francisco, CA
OMMA Premium Display @ Advertising Week | 09/30/2014 | New York, NY
OMMA RTB (Real-Time Buying) @ Advertising Week | 10/02/2014 | New York, NY
The Hub Brand Experience Symposium | 10/07/2014 - 10/08/2014 | New York, NY


3 mistaken assumptions about what Big Data can do for you

CITEworld

Big data is certainly all the rage. The Wall Street Journal recently ran a piece on data scientists commanding up to $300,000 per year with very little experience. Clearly the era of embracing big data is here.

However, since the tools and best practices in this area are so novel, it’s important to revisit our assumptions about what big data can do for us – and, perhaps more importantly, what it can’t do. Here are three commonly held yet mistaken assumptions about what big data can do for you and your business.

Big Data Can’t Predict the Future

Big data – and all of its analysis tools, commentary, science experiments and visualizations – can’t tell you what will happen in the future. Why? The data you collect comes entirely from the past. We’ve yet to reach the point at which we can collect data points and values from the future.

We can analyze what happened in the past, draw trends between actions, decision points and their consequences, and use those trends to guess that under similar circumstances a similar decision would produce similar outcomes. But we can’t predict the future.
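As a rough illustration of that distinction, here is a minimal Python sketch on invented data: the trend fit describes the past, and extrapolating it forward is only a guess that circumstances stay the same.

```python
# Minimal illustration (hypothetical data): fitting a trend to past
# observations and extrapolating it. The "forecast" is only a projection
# of historical patterns, not knowledge of the future.
import numpy as np

# Weekly revenue for the past 12 weeks (invented numbers)
weeks = np.arange(12)
revenue = np.array([100, 103, 101, 107, 110, 108, 115, 117, 116, 121, 124, 123])

# Fit a straight-line trend to the historical data
slope, intercept = np.polyfit(weeks, revenue, 1)

# "Predict" week 13 by extrapolating the same trend
week_13 = slope * 12 + intercept
print(f"Trend: +{slope:.1f} per week; extrapolated week 13: {week_13:.1f}")
# If circumstances change (a new competitor, a price change, seasonality),
# this extrapolation says nothing useful about what actually happens.
```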

Many executives and organizations attempt to glean the future out of a mass of data. This is a bad idea, because the future is always changing. You know how financial advisers always use the line, “Past performance does not guarantee future results?” This maxim applies to big data as well.

Instead of trying to predict the future, use big data to optimize and enhance what’s currently true. Look at something that’s happening now and constructively improve upon the outcomes for that current event. Use the data to find the right questions to ask. Don’t try to use big data as a crystal ball.

Big Data Can’t Replace Your Values – or Your Company’s

Big data is a poor substitute for values – those mores and standards by which you live your life and your company endeavors to operate. Data may crystallize your choices on substantive issues and make it easier to sort out the advantages and disadvantages of various courses of action, but it can’t tell you how those decisions stack up against the standards you set for yourself and for your company.

Data can paint all sorts of pictures, both in the numbers themselves and through the aid of visualization software. Your staff can create many projected scenarios about any given issue, but those results are simply that – a projection. Your job as an executive, and as a CIO making these sorts of tools and staff available within your business, is to actually reconcile that data against your company’s values.

For instance, imagine you’re a car manufacturer. Your big data sources and tools tell you that certain vehicle models have a flaw that would cost a few cents to repair on vehicles yet to be manufactured, but significantly more to repair on vehicles that customers have already purchased and are driving. The data, and thus your data scientists, might recommend fixing the issue on cars still on the assembly line but not bothering with the cars already out in the world, simply because the numbers show the cost of a recall exceeding the expected cost of damages across the board.

(Note that this scenario may sound familiar to you if you have been following the General Motors ignition switch saga. However, this is only a hypothetical example, and further, there is no evidence big data played into the GM recall.)

Say your company has a value statement that quality is job 1 and safety is of paramount importance. Though the data suggests a recall isn’t worth it, you make the call as an executive to start the recall. You’re informed, but you’re not controlled by big data.
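A back-of-the-envelope version of the comparison those data scientists might run could look like the following sketch; every figure here is invented purely to illustrate the structure of the decision.

```python
# Hypothetical expected-cost comparison (all numbers invented).
cars_on_road = 1_000_000
recall_cost_per_car = 40.00            # parts, labor, logistics
failure_probability = 0.0001           # estimated chance of the flaw causing damage
expected_damage_per_failure = 250_000  # estimated liability per incident

recall_cost = cars_on_road * recall_cost_per_car
expected_damages = cars_on_road * failure_probability * expected_damage_per_failure

print(f"Recall cost:      ${recall_cost:,.0f}")
print(f"Expected damages: ${expected_damages:,.0f}")
# Here the recall "costs more" than the expected damages, which is exactly
# the kind of answer a values statement about safety exists to override.
```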

Above all, it’s vital to remember that sometimes the right answer appears to be the wrong one when viewed through a different lens. Make sure you use the right lens.

Read more…

Shadow cloud services pose a growing risk to enterprises

IDG News Service

A growing tendency by business units and workgroups to sign up for cloud services without any involvement from their IT organization creates serious risks for enterprises.

The risks from shadow cloud services include issues with data security, transaction integrity, business continuity and regulatory compliance, technology consulting firm PricewaterhouseCoopers (PwC) warned last week.

“The culture of consumerization within the enterprise — having what you want, when you want it, the way you want it, and at the price you want it — coupled with aging technologies and outdated IT models, has propelled cloud computing into favor with business units and individual users,” PwC said in a report.

Increasingly, workgroups and even individual users in companies are subscribing directly to cloud services for business reasons because it is easy and relatively inexpensive for them to do so, said Cara Beston, cloud risk assurance leader at PwC.

“There is a new form of shadow IT and it is likely more pervasive across the company” than many might imagine, given the easy access to cloud services, Beston said. “It is harder to find, because it is being procured at small cost and is no longer operating within the bounds of the company.”

Some typical use cases for shadow cloud services include collaboration software, storage, customer relationship management apps and human resources applications.

The Software as a Service (SaaS) delivery model allows business units and workgroups to quickly deal with business process challenges without having to wait for IT to help out. The fact that the cost for such services is usually an operating expense rather than a capital expense is another advantage.

“Shadow cloud is happening under the radar” at many organizations, Beston said. Without governance, such cloud services present significant data security risks and the potential for technology and service redundancies.

Risks include inadvertent exposure of regulated data; improper access to, and control over, protected and confidential data and intellectual property; and breaches of rules governing how certain data must be handled.

Companies in regulated industries face a real risk of becoming non-compliant with data security and privacy obligations without even realizing it. Importantly, while many business users sign onto cloud services because of the perceived lower costs, a lack of control over how the services are being used can often result in service duplication and higher-than-anticipated operational costs, she said.

Cloud services for work groups of between five and 10 business users can range from as little as a few hundred dollars a month to a few thousand dollars. But the costs can quickly get out of control when all the different groups that might be using similar services within an organization are counted.
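A rough, hypothetical illustration of how those individually small subscriptions compound across an organization (all figures invented):

```python
# Hypothetical illustration of shadow-cloud cost sprawl (figures invented).
workgroups = 40                      # teams that signed up independently
avg_monthly_cost_per_group = 1_500   # within the few-hundred-to-few-thousand range
overlapping_services = 0.30          # share of subscriptions duplicating an existing tool

annual_spend = workgroups * avg_monthly_cost_per_group * 12
wasted_on_duplicates = annual_spend * overlapping_services

print(f"Annual shadow-cloud spend: ${annual_spend:,.0f}")
print(f"Estimated duplicate spend: ${wasted_on_duplicates:,.0f}")
```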

Continue reading…

5 Measurement Pitfalls to Avoid

Mashable

Say your goal is to increase the number of customers you serve each day. Perhaps you run a city office processing food stamp applications, or maybe you’re offering technical support for your company’s product. How many customers do you serve online, in person and over the phone? What’s the average time to resolve a problem in each of these channels? Which types of customer requests take the longest, and which can be handled expediently?

If you can’t answer these questions, you’re setting yourself up for failure before you even begin to try.

Data-driven decision making is a way of life these days, from city hall to the corporate boardroom. If you have the numbers to dictate a course of action, the thinking goes, why would you use your heart or your mind? But in the quest to back up every move with cold, hard data, it can be easy to mistake any old numbers for useful numbers. Not all data is created equal, and the best way to ensure you’ll be collecting the right data is to develop the right set of performance metrics.

So how do you decide which metrics will help you and which will just distract you from the central issues? Here are five common mistakes people make when dealing with data, and some tips to avoid them.

Mistake #1: Just having metrics is enough

It’s true that measuring a little bit is better than measuring nothing. But too many people are satisfied upon merely being able to utter the word “metrics” to a supervisor, and too many supervisors assume that if their team is counting anything at all, they must be doing something right.

Data is only useful if it allows you to measure and manage performance quality. This means it’s not necessarily as important for, say, the Buildings Department to count how many buildings passed inspection as it is for it to know the types of citations that caused them to fail, the number of inspections each inspector completed in one day, and how many buildings corrected their violations within one or two months of initial inspection. This richer set of data will reveal inefficiencies in the inspection process and allow the department to work toward better safety standards.
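As a sketch of what that richer measurement could look like in practice, here is a minimal Python example; the records, field names and thresholds are invented for illustration.

```python
# Hypothetical inspection records (all data invented for illustration).
from collections import Counter

inspections = [
    {"inspector": "A", "passed": False, "citation": "electrical", "fixed_within_60_days": True},
    {"inspector": "A", "passed": True,  "citation": None,         "fixed_within_60_days": None},
    {"inspector": "B", "passed": False, "citation": "fire_exit",  "fixed_within_60_days": False},
    {"inspector": "B", "passed": False, "citation": "electrical", "fixed_within_60_days": True},
]

# Which citation types cause the most failures?
failure_reasons = Counter(r["citation"] for r in inspections if not r["passed"])

# How many inspections did each inspector complete?
workload = Counter(r["inspector"] for r in inspections)

# What share of violations were corrected within 60 days?
violations = [r for r in inspections if not r["passed"]]
corrected = sum(1 for r in violations if r["fixed_within_60_days"])
correction_rate = corrected / len(violations)

print(failure_reasons, workload, f"{correction_rate:.0%}")
```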

Mistake #2: The more metrics, the better

A common misconception is that if something can be counted, it should be counted. I’ve made the mistake of laying out tabs and tabs of metrics on a spreadsheet, only to find that the effort required to collect the data is a drain on not only my time, but the time of the people assigned to carry out the very work we’re trying to measure.

You never want your performance monitoring to be so onerous that it actually hinders performance itself. When coming up with a set of metrics, it helps to start by brainstorming everything you could possibly measure, then prioritizing the top 10 indicators that will yield the most critical information about your program. Start with a manageable load, and gradually add more — as long as the effort required to collect the data will pay for itself in useful observations and opportunities for improvement.

Mistake #3: Value judgments should be assigned to volumes

On the surface, it may seem intuitive that more calls answered is better than fewer calls answered. But imagine that in order to squeeze in an extra five calls an hour, the quality of each call is compromised. Less information is gathered, and fewer issues are addressed. Callers aren’t satisfied with the first call, so they call a second or a third time, further increasing your call numbers but taking up extra time and failing to address the reasons why the calls are coming in the first place. Perhaps calls that last a minute longer but more adequately address the caller’s questions end up preventing repeat calls, thus rendering the more-equals-better line of thinking not just mistaken, but backwards.

It’s also important to realize that many metrics, when counted as absolute numbers, aren’t particularly helpful. Without context, a number is more or less meaningless. Any numerator deserves a denominator, and pure numbers should be represented as a percentage of the total. For example, moving 1,000 homeless individuals off the street and into temporary housing is laudable. But if the goal is to create housing for 20,000 homeless people, then it’s important to recognize that you’re only 5% of the way there.
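The arithmetic in that example is trivial, but the habit it encodes, always pairing a count with its denominator, is the point:

```python
# Pair every raw count with its denominator before reporting it.
housed = 1_000   # individuals moved into temporary housing
goal = 20_000    # total homeless population the program aims to house

progress = housed / goal
print(f"{housed:,} housed is {progress:.0%} of the {goal:,}-person goal")  # -> 5%
```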

Continue reading…

The internet of things – the next big challenge to our privacy

The Guardian

If there’s a depressing slogan for the early era of the commercial internet, it’s this: “Privacy is dead – get over it.”

For most of us, the internet is complex and opaque. Some might be vaguely aware that their personal data are being sucked up, their search histories tracked, and their digital journeys scoured.

But the current nature of online services provides few mechanisms for individuals to have oversight and control of their information, particularly across tech-vendors.

An important question is whether privacy will change as we enter the era of pervasive computing. Underpinned by the Internet of Things, pervasive computing is where technology is seamlessly embedded within the real world, intrinsically tied to the physical environment.

If the web is anything to go by, the new hyperconnected world will only make things worse for privacy. Potentially much worse.

More services and more things only mean more data being generated and exchanged. The increase in data volume and complexity might plausibly result in less control. It’s a reasonable assumption, and it leaves privacy in a rather sorry state.

Many of the future predictions about privacy reflect this bleak diagnosis. If privacy isn’t dead yet, then billions-upon-billions of chips, sensors, and wearables will seal the deal.

But before jumping to such conclusions – and bearing in mind the immense power of established tech-vendors and their interest in this space – there may still be reasons to be positive. In particular, the fundamental differences between pervasive computing and Web 2.0 provide a beacon of hope.

One difference is that with pervasive computing, much of the technology becomes tangible and familiar. This makes issues of privacy more readily apparent to users. Web browsing histories stretching back over time are one thing; Google Glass is quite another.

If you can physically witness aspects of data collection, it short-circuits what has traditionally been a long feedback loop between privacy risk and cumulative effect. The hope is that the increased awareness inspires action.

This ties to a second difference: the technology itself could enable action. Unlike the web, where offerings tend to be one-size-fits-all, pervasive computing is driven by the individual, focusing on customised, person-centric services and experiences.

If the technology supporting this properly places individuals in the driving seat, it could also be used to provide individuals with the opportunity to take control of their personal data.

Moving from the abstract web

It has taken years for the sort of awareness and backlash that we’re now starting to see against Facebook, Google, and other major internet vendors that trade in personal data.

This is a product, in many respects, of the inherent obscurity of data collection by web-based services.

Moving from the web to the Internet of Things, many aspects of technology shift from being abstract and hidden, to being grounded in the real world.

Continue reading…

Combining the Flexibility of Public-Cloud Apps with the Security of Private-Cloud Data White Paper

CITEworld

Cloud applications are a priority for every business – the technology is flexible, easy-to-use, and offers compelling economic benefits to the enterprise. The challenge is that cloud applications increase the potential for corporate data to leak, raising compliance and security concerns for IT. A primary security concern facing organizations moving to the cloud is how to secure and control access to data saved in cloud applications.

This white paper explores technologies that combine the flexibility of public cloud apps like Salesforce and Box, with the security and compliance of a private cloud. When deployed as part of an end-to-end data protection program, such an approach can provide the same security and assurances as can be achieved with premises-based applications.

Comprehensive Data Protection in the Cloud

In today’s business, IT may no longer own or manage the apps, the devices, or the underlying network infrastructure, yet is still responsible for securing sensitive corporate data. While cloud application vendors secure their infrastructure, the security of the data remains the responsibility of the customer using the application. A comprehensive approach to data security in cloud environments covers the full lifecycle of data in an organization—in the cloud, on the device, and at the point of access.

• In the Cloud—Most cloud apps don’t encrypt data at rest, and those that do manage the encryption keys themselves. For organizations in regulated industries and/or with sensitive data stored in these apps, the problem of maintaining the confidentiality of corporate data remains unsolved. (A minimal client-side encryption sketch follows this list.)

• At Access—Cloud apps provide limited access control, data leakage prevention, and visibility compared with applications hosted on premises. This makes it difficult to control who accesses cloud applications, what they access, and from where and when.

• On the Device—Since cloud applications can be accessed from any device, anywhere, a comprehensive security solution should include protection for cloud application data on client devices such as laptops, tablets and smartphones.
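One slice of the “In the Cloud” concern above, keeping the keys under the organization’s control, can be sketched with client-side encryption: data is encrypted with a company-held key before it ever reaches the cloud app. This is only a minimal illustration using the Python cryptography library, not a description of any particular vendor’s product.

```python
# Minimal client-side encryption sketch: the organization, not the cloud
# provider, holds the key. Requires the "cryptography" package.
from cryptography.fernet import Fernet

# In practice the key would live in a key-management system the company controls.
key = Fernet.generate_key()
cipher = Fernet(key)

record = b"customer: Acme Corp; contract value: $1.2M"  # invented example data
ciphertext = cipher.encrypt(record)

# Only the ciphertext is uploaded to the cloud application.
upload_to_cloud_app = ciphertext

# Reading it back requires the company-held key.
assert cipher.decrypt(ciphertext) == record
```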

Click here to view the full white paper

 

The Rise of Cloud in the Channel

Cloud services represent a growing opportunity for partners of all types in a wide array of activities across resale, services, and development. However, it’s of key importance that partners have an understanding of the what, where, how, and why of cloud services prior to embarking on wholesale business strategy change.

This IDC study, commissioned by Microsoft, examines the implications of becoming a successful cloud partner in 2013. Developed with insight garnered through in-depth conversations with leading Microsoft cloud partners and backed by supportive survey data (see methodology for further details), it provides a profile of the potential upside of integrating cloud into a partner’s mix of solution offerings. Finally, it concludes with guidance as a partner begins, or continues, their journey into the cloud.

Computerworld Recognizes Organizations Achieving Business Benefits through Big Data with Data+ Editors’ Choice Awards

IDG Enterprise—the leading enterprise technology media company composed of Computerworld, InfoWorld, Network World, CIO, DEMO, CSO, ITworld, CFOworld and CITEworld—announces the 2014 Computerworld Data+ Editors’ Choice Award honorees. The awards recognize 20 innovative big data initiatives that have delivered significant business value; the awards ceremony will take place at the Data+ conference, held September 7-9, 2014 at the Hyatt Regency in Phoenix, Arizona.

“We are pleased to announce the 2014 Data+ Editors’ Choice Awards honorees,” said Scot Finnie, editor in chief, Computerworld. “This year’s honorees have clearly demonstrated how their innovative strategies use data and analytics to make better business decisions, streamline processes and, in some cases, generate new revenue by tapping into new markets and/or creating ancillary data-based services.”

In addition to recognizing the Data+ Editors’ Choice Awards honorees, the Data+ conference will cover key technology topics involved in a data strategy, from making data available quickly, efficiently and affordably to cleansing and connecting it to selected analytics and visualization tools, then driving new business insights and products from those efforts. The Data+ Editors’ Choice honorees will join business leaders and IT decision-maker peers at the Data+ conference. The full conference agenda can be viewed here: Data+ conference agenda.

“The Data+ Editors’ Choice Awards honorees are not only innovative in their use of big data analytics, but also show real-world results and help establish best practices for other IT practitioners in a rapidly expanding technology area,” said Adam Dennison, SVP, publisher, IDG Enterprise. “It’s exciting to honor organizations that are effectively using data to predict business trends and monetize this information. We look forward to hearing more from these organizations as they lead discussions and share case studies with attendees.”

2014 Data+ Editors’ Choice Award Honorees:

  • AstraZeneca
  • Blue Cross Blue Shield of Tennessee
  • Center for Tropical Agriculture
  • Cisco
  • Colorado Department of Public Safety (Division of Homeland Security & Emergency Management)
  • Emory University
  • Google
  • HealthTrust Technology Innovation (Division of HCA Information Technology & Services)
  • Idaho National Laboratory
  • Intel Corporation
  • Keller Williams Realty
  • Kennesaw State University
  • Kisters
  • Los Angeles Clearinghouse
  • Merck & Co.
  • Persistent Systems
  • Point Defiance Zoo & Aquarium
  • Shine Technologies
  • Texas Children’s Hospital
  • Thomson Reuters

The Data+ Editors’ Choice Awards honorees and their achievements will also be highlighted in a special September feature on Computerworld.com.

Sponsors
Current Data+ sponsors include: Information Builders, Neudesic, Saxon Global Inc., ThoughtSpot Inc., and TIBCO Software Inc. For more information regarding sponsorship opportunities, please contact Adam Dennison, SVP, publisher, IDG Enterprise at adennison@idgenterprise.com.

Registration Information
To learn more about the conference, view the agenda, or to register, visit www.dataplusconference.com, call 800.355.0246 or email seminars@nww.com.

About Computerworld’s Data+ Editors’ Choice Awards
The Computerworld Data+ Editors’ Choice awards program was launched in 2013 by IDG’s Computerworld editorial team to recognize organizations that are mining big data to analyze and predict business trends and monetize this information. Organizations were asked to complete questionnaires detailing their big data projects, which were then reviewed by the Computerworld editorial team. From those questionnaires, honorees were selected for their ability to achieve business benefits through big data, and demonstrate real-world results and best practices. View the 2013 winners on Computerworld.com.

About IDG Enterprise
IDG Enterprise, an International Data Group (IDG) company, brings together the leading editorial brands (Computerworld, InfoWorld, Network World, CIO, CSO, ITworld, CFOworld and CITEworld) to serve the information needs of our technology and security-focused audiences. As the premier hi-tech B2B media company, we leverage the strengths of our premium owned and operated brands, while simultaneously harnessing their collective reach and audience affinity. We provide market leadership and converged marketing solutions for our customers to engage IT and security decision-makers across our portfolio of award-winning websites, events, magazines, products and services. IDG’s DEMO conferences provide a platform for today’s most innovative and eye-opening technologies to publicly launch their solutions.

Company information is available at www.idgenterprise.com
Follow IDG Enterprise on Twitter: @IDGEnterprise #DataPlus
Join IDG Enterprise on LinkedIn
Like IDG Enterprise on Facebook: www.facebook.com/IDG.Enterprise

###

Contact
Whitney Cwirka
Marketing Specialist
IDG Enterprise
wcwirka@idgenterprise.com
Office: 508.935.4414

Twitter Is Cracking Down On Companies That Provide Stats About Its Users

Business Insider

Twitter has taken the unusual step of shutting off its data pipe to certain companies that have published their own stats on how big Twitter’s user base really is, according to two sources.

The move comes after Twitter’s stock was hammered in the early part of the year when investors discovered growth in monthly active users (MAUs) was slowing or stagnant, and that measures of engagement per user were on the decline.

Since then, Twitter CEO Dick Costolo has ordered a revamp of the Twitter user interface in order to make it easier and more attractive for people to use. He also reshuffled his management ranks, getting rid of a COO with a largely financial background and replacing him with a product chief from Google.

At the same time, Twitter’s stock price rose nicely. Some analysts see it hitting $60 a share (see disclosure below).

But third-party companies that published their own measures of Twitter’s user base were a thorn in Twitter’s side. While Costolo touted the company’s growth to 255 million MAUs, Business Insider was able to report that the number was only a fraction of the 1 billion people who had tried Twitter.

Most people who sign up for Twitter abandon it, it seems. Also, most people on Twitter don’t tweet, according to third-party apps that accessed Twitter’s data firehose.

Now, companies that used to provide that data have been axed from Twitter’s application programming interface (API), the firehose of data that software development companies can plug into in order to build useful products for Twitter and its users.

Twitter declined to comment when reached by Business Insider.

We don’t know why Twitter has begun culling developers from its API, but one theory might be that it has nothing to do with wanting to restrict who sees user data. Rather, Twitter has been slowly building a very nice data business of its own, which will probably book $100 million in revenue this year. The company may simply have decided it is time to end the free ride for developers who give away for free what Twitter would rather charge for.

“They shut me down last Friday night after the market closed,” one developer told Business Insider.

Why Facebook’s user experiment is bad news for businesses

CITEworld

The big data problem isn’t just about handling petabytes of information, or asking the right question, or avoiding false correlations (like understanding that just because drownings rise at the same time that ice cream sales do, banning ice cream won’t reduce drownings).

It’s also about handling data responsibly. And so far, we’re not doing as well with that as we could be.

First Target worked out how to tell if you’re pregnant before your family does and decided to disguise its creepy marketing by mixing in irrelevant coupons with the baby offers. Then Facebook did research to find out if good news makes you depressed by showing some people more bad news and discovered that no, we’re generous enough to respond to positive posts with more positivity.

But if companies keep using the information about us in creepy ways instead of responsible ones, maybe we’ll stop being generous enough to share it. And that could mean we lose out on more efficient transport, cleaner cities and cheaper power, detecting dangerous drug interactions and the onset of depression — and hundreds of other advances we can get by applying machine learning to big data.

It’s time for a big data code of conduct.

Facebook’s dubious research is problematic for lots of reasons. For one thing, Facebook’s policy on what it would do with your data didn’t mention research until four months after it conducted the experiment. Facebook’s response was essentially to say that “everyone does it” and “we don’t have to call it research if it’s about making the service better” and other weasel-worded corporate comments. And the researcher’s apology was more about having caused anxiety by explaining the research badly than about having manipulated what appeared in timelines, because Facebook is manipulating what you see in your timeline all the time. Of course, that’s usually to make things better, not to see what your Pavlovian reaction to positive or negative updates is. The fact that Facebook can’t see that one is optimizing information and the other is treating users as lab rats — and that the difference is important — says that Facebook needs a far better ethics policy on how it mines user data for research.

Plus, Facebook has enough data that it shouldn’t have needed to manipulate the timelines in the first place; if its sentiment analysis was good enough to tell the difference between positive and negative posts (which is doubtful given how basic it was and how poor sentiment analysis tools are at detecting sarcasm), it should have been able to find users who were already seeing more positive or more negative updates than most users and simply track how positive or negative their posts were afterwards. When you have a hypothesis, you experiment on your data, not your users.
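As a sketch of the observational approach described above, the following toy example scores sentiment on posts users have already seen and written, then compares naturally occurring groups instead of manipulating anyone’s feed. The word lists and data are invented, and real sentiment analysis is far more involved.

```python
# Toy observational study on existing data (all data invented).
POSITIVE = {"great", "happy", "love", "fun"}
NEGATIVE = {"sad", "awful", "angry", "terrible"}

def sentiment(post: str) -> int:
    words = post.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

# Posts each user already *saw* last week, and posts they *wrote* this week.
feeds = {
    "user1": (["love this great day", "so much fun"], ["feeling happy today"]),
    "user2": (["awful news again", "so sad and angry"], ["what a terrible week"]),
}

for user, (seen, wrote) in feeds.items():
    exposure = sum(sentiment(p) for p in seen)
    response = sum(sentiment(p) for p in wrote)
    group = "positive-exposure" if exposure > 0 else "negative-exposure"
    print(user, group, "later wrote with sentiment", response)
# Compare the two naturally occurring groups instead of altering timelines.
```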

That’s how Eric Horvitz at Microsoft Research has run experiments to detect whether you’re likely to get depression, whether two drugs are interacting badly, whether a cholera epidemic is about to happen, and whether people are getting used to cartel violence in Mexico.

Using public Twitter feeds and looking at language, how often people tweet, at what time of day, and how that changes, Horvitz’s team was able to predict with 70% accuracy who was going to suffer depression (which might help people get treatment and reduce the suicide rate from depression). Not only did they use information people were already sharing, they asked permission to look at it.
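To make the kind of signals involved a bit more concrete, here is a minimal, hypothetical feature-extraction sketch over a public feed; the records and thresholds are invented and bear no relation to the actual research pipeline.

```python
# Hypothetical feature extraction from a public feed (all fields invented).
from datetime import datetime
from statistics import mean

tweets = [  # toy records: timestamp and text
    {"time": datetime(2014, 9, 1, 2, 14), "text": "can't sleep again"},
    {"time": datetime(2014, 9, 1, 23, 50), "text": "another long day"},
    {"time": datetime(2014, 9, 3, 3, 5), "text": "everything feels pointless"},
]

def features(posts):
    # Share of posts written late at night (before 5am or after 11pm)
    late_night = mean(1 if p["time"].hour < 5 or p["time"].hour >= 23 else 0 for p in posts)
    # Crude rate of first-person language per post
    first_person = mean(p["text"].lower().count("i ") + p["text"].lower().count("my ") for p in posts)
    # Posting frequency over the observed window
    posts_per_day = len(posts) / max(1, (posts[-1]["time"] - posts[0]["time"]).days)
    return {"late_night_share": late_night,
            "first_person_rate": first_person,
            "posts_per_day": posts_per_day}

print(features(tweets))
# In the real work, features like these fed a trained classifier; here they
# are only meant to show the kind of signals the research looked at.
```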

Click to read more…

Is Big Data a ‘Big Deal’ to your Company?

Network World

There’s no doubt that big data is a big deal to companies today.

The benefits of big data include greater insight into customer sentiment, improved employee productivity, smoother operations and processes, and better decision making. And it’s not just talk; a growing number of companies are taking action to implement big data projects. According to a recent survey by IDG, 49 percent of the 751 respondents say they are implementing or are likely to implement big data projects in the future, with 12 percent reporting that they have already implemented such projects.

As big data projects move from the planning to the implementation stage, however, many companies are learning that they aren’t prepared for all of the changes that these projects bring. Big data by definition involves very large quantities of unstructured data in various formats that often change in real time. Because big data encompasses so much information in so many formats that must be pulled together for analysis, it has a significant impact on enterprise networks and IT infrastructures.
