Seen on a prominent benefits company’s site regarding Quicken capabilities:
“Our website is not compatible with Quicken however, our software apparently communicates with Mint.com more easily, and the feeling is that Mint.com is what Intuit (who also does Quicken) is focusing more on as the future, so our developers and management believe that compatibility with Mint.com makes more sense.”
“Therefore, if you care to use Mint.com, you will find that our website does work with it for pulling information.”
I can’t remember a term as over-hyped in my time in tech as “big data.” But the shift in technologies surrounding data and analytics is very real, and it provides true value to organizations that can harness seas of bits to reveal important insights and new knowledge. Here’s the way I think about the evolution toward large-scale data analysis.
The Big Data Card Catalog
The analogy I like to use is a library, the kind where you check out books. Traditional business intelligence was like the card catalog. As new books arrived, data about them was summarized, aggregated, and put in a big BI-like filing system. Because databases were traditionally expensive and overtaxed, it made tons of sense to pull summary data into a BI “card catalog” and operate on that. How many books do we have? How many are mysteries? What’s the average page count? We make dashboards and reports about the status of books, how many are coming and going, and which get checked out the most. All valuable stuff.
The Shift to Detail – Words on Pages
But soon, people wanted to get at the data itself: the pages and the words in ALL the books. It was great to operate on the summaries, but the next level of value was found by analyzing the detail. Were books with more adjectives more or less likely to be checked out? Could we analyze word patterns and better understand what made a really desirable book? Could we cohort books by analyzing the detail more effectively than the human-generated card catalog did? This first tier of big data was well served by a new era in analytic databases: scale-out, MPP, column-store, and in-memory systems made this kind of analysis inexpensive and fast. The funny thing is, there weren’t many tools built on top of these systems to give analysts and end users the ability to really explore these immense and complex datasets. So analysts went back to hand-coding SQL, writing MapReduce jobs, and building proprietary discovery tools that could harness the power of these new data systems.
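To make the “analyzing all the words in all the books” idea concrete, here is a toy sketch in the spirit of those hand-rolled MapReduce jobs: map each book’s text to word counts, then reduce the per-book counts into a single total. The book texts are invented placeholders, not real data.

```python
from collections import Counter

# Invented sample "books"; in a real system these would be full texts
# streamed out of an analytic store or a distributed filesystem.
books = {
    "mystery_001": "the detective slowly opened the creaky old door",
    "romance_002": "the warm bright morning felt endless and sweet",
}

def map_words(text):
    """Map step: one book's text -> word counts for that book."""
    return Counter(text.split())

def reduce_counts(counters):
    """Reduce step: merge all per-book counts into one total."""
    total = Counter()
    for c in counters:
        total.update(c)
    return total

word_totals = reduce_counts(map_words(t) for t in books.values())
```

At library scale the same map/reduce shape runs in parallel across many machines; the logic per book stays this simple.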
The Big Stuff Generated by Machines
So in the library analogy, what could be bigger than analyzing all the words in all the books? That’s where big data moved next. Rather than limit ourselves to the books, what if we extended the analogy into the reader’s realm? What if we started to capture not just the books themselves, but all the events surrounding how people used them? The sphere gets big pretty fast, and the result is another giant increase in data size. For every book, how many times was page 54 read? Did people skip to the end or start at the beginning? On average, how long did people read the book in each sitting? All of this data dwarfs the size of the actual book; it could easily be 100 times bigger. Multiply that by all the books and readers in the library and we have a really large bucket of bits. This is the world of machine-generated big data: event logs and click streams.
Where’s the Value in Big Data?
The most important point is that there’s value at every level. And in my humble opinion, we tech professionals have recently focused on machine-generated “big data” at the expense of better access and discovery across data at every level. We have separate big data systems, different transactional systems, and different tools to peer into each. The real value moving forward will come from providing analysis across all of this data, big or small.
When we join event data with transaction data, we can get answers to some really interesting questions. We could, for example, cohort readers by age (user database), correlating groups who check out books that frequently mention “attention deficit disorder” (detail data analysis) with whether they always skip to the back 5% of the book to see how the story ends (event data analysis).
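That cross-system join can be sketched with plain SQL. The schemas, table names, and rows below are all invented for illustration: a `users` table stands in for the transactional user database, an `events` table for the machine-generated reading events, and the query cohorts readers by age while counting their reading behaviors.

```python
import sqlite3

# In-memory database with hypothetical transaction + event tables.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE users  (user_id INTEGER, age INTEGER);
CREATE TABLE events (user_id INTEGER, book_id TEXT, action TEXT);
INSERT INTO users  VALUES (1, 34), (2, 61);
INSERT INTO events VALUES (1, 'adhd_guide', 'skip_to_end'),
                          (2, 'adhd_guide', 'read_cover_to_cover');
""")

# Join event data with transaction data, cohorted by an age bracket.
rows = con.execute("""
    SELECT CASE WHEN u.age < 50 THEN 'under_50' ELSE '50_plus' END AS cohort,
           e.action,
           COUNT(*) AS n
    FROM events e
    JOIN users u ON u.user_id = e.user_id
    GROUP BY cohort, e.action
""").fetchall()
```

The point is the join itself: one query spans the user database and the event stream, which is exactly the kind of cross-system analysis argued for above.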
303 redirects are a redirect type that will not pass any SEO value or “juice” and will not remove pages from a search engine’s index, even if the pages are deleted from a domain’s server. This type of redirect is temporary and easily misinterpreted or misunderstood by older search engines.
Methodology for Handling 303 Redirects
The response to the request can be found under a different URL and SHOULD be retrieved using a GET method on that resource. This method exists primarily to allow the output of a POST-activated script to redirect the user agent to a selected resource. The new URL is not a substitute reference for the originally requested resource. The 303 response MUST NOT be cached, but the response to the second (redirected) request might be cacheable.
The 303 redirect was developed to prevent a form from being resubmitted after an HTTP POST request. As an example, a recent Southwest Airlines issue caused customers to be charged multiple times for a deal they saw presented on the airline’s Facebook page. Once a user purchased a ticket, a redirect loop came into play, causing multiple credit card charges. A 303 redirect is exactly the type of redirect to use in this situation to avoid such problems. Note, however, that when the redirect response the browser and search engines receive comes from the CMS rather than the server level, it decreases page speed.
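Here is a minimal, framework-free sketch of that Post/Redirect/Get pattern. Everything in it is invented for illustration: the `charge_card` helper, the form fields, and the `/confirmation` path. The idea is simply that the POST handler does its side effect once and answers with a 303, so a browser refresh repeats only the harmless GET.

```python
charges = []  # stand-in for the real payment system's record of charges

def charge_card(form):
    """Hypothetical side effect: record one charge per order."""
    charges.append(form["order_id"])

def handle_checkout(method, form=None):
    """Toy request handler returning (status, headers, body)."""
    if method == "POST":
        charge_card(form)  # the side effect happens exactly once
        # 303 See Other: the client must follow up with a GET, so
        # hitting refresh re-issues the GET, not another POST.
        return 303, {"Location": "/confirmation"}, ""
    return 200, {}, "Order confirmed."

status, headers, _ = handle_checkout("POST", {"order_id": "SW-1234"})
status2, _, body = handle_checkout("GET")  # browser follows Location
```

After the redirect, refreshing the confirmation page only repeats the GET, so `charges` still holds a single entry; without the 303, each refresh would re-run the POST and charge the card again.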
301 vs. 302 Redirects
In practice, the 303 redirect has largely fallen out of use and been supplanted by the 302 redirect. The 302 redirect, similar to the 303, also provides no SEO value to the pages involved and will not remove old pages from the search results.
303 redirects – Our Recommendations:
If you use 303 redirects, STOP right now and have someone help you set up the correct, most relevant redirects for your site.
User Acceptance Testing (UAT) is a beast; we all know that. During this process, business analysts execute numerous UAT tasks depending on the project type, duration, and organizational standards.
Solution Validation: Validate that the solution meets the business requirements
Verify Organization Readiness: The BA should make sure the end users are ready to use the solution by checking that the required resources, along with relevant tools and training, are created and made available
Identification & Validation of Test Case Scenarios: The BA should identify the test case scenarios that will be tested in the UAT phase and get those scenarios validated by the end user
Create Training Plan: The BA should create a training plan for the engagement of the required resources
Create UAT Plan: The BA should publish the UAT plan to the required resources
Conduct UAT: UAT should be conducted with its objective in mind, which is to “make sure the solution fulfills the day-to-day transactions of the business, along with any other known exceptions”
Record the Results: UAT tasks can only be effective if issues are logged religiously
UAT Feedback: The BA should confirm with the user that the solution fulfills the business needs as anticipated and communicate that feedback to the related stakeholders
Conduct UAT Signoff (Approval to GO LIVE)
How to Adopt RTC EE – Part 1 “Technology” by Kristin Cowhey, VP Sales and Marketing at PacGenesis
Excerpt: “If you’re thinking of adopting RTC EE within your organization, you’re likely already aware that it can be a daunting task. Whether you’re evaluating RTC as a possible solution or looking to grow your existing RTC implementation from distributed…”
If it was an effective meeting, you took good notes… or is it because you took notes that it was an effective meeting?
Either way, good note taking skills are imperative.
Have you ever been in a meeting or conference call where no one did what they promised to accomplish at the previous one? Because nothing has moved forward since the last meeting, there’s little to go over, and people lose interest soon… very soon!
Momentum grinds to a halt and all you hear is radio silence.
It is a common scenario, yet one that can often be avoided simply by ensuring good notes are circulated promptly before and after every meeting. This keeps the dialogue fresh in people’s minds, because everyone has a very clear reminder of what they have to do and what they are responsible for.
I have been using the same note taking template since 2002, when it was shared with me by a man named Bill Whitley who was the owner of my company at the time.
I was recently reading a white paper by SuccessFactors titled “Doing the Right Things: Using Goal Management to Drive Business Execution.”1
A section that really stood out, and is worthy of sharing, discusses the difference between goals and tasks.
As an obsessive list maker who hasn’t completed a list in umpteen years, this topic obviously caught my attention. The basic premise is that employees should have no more than 10 goals. When an employee has more than 10 goals, it’s usually because they have confused goals with tasks:
Goals are outcomes, accomplishments, or responsibilities people need to fulfill to be effective in their jobs. Tasks are activities people perform to achieve these goals.
Then to bring the idea home, the paper makes a rather excellent sports analogy:
The difference between tasks and goals is like the difference between executing plays in football and scoring points. At the end of the day achieving points is what matters, not the number of plays you ran.
I was much more of a soccer player but even I related to this mainly because of how awesome it is! Thank you SuccessFactors for actually creating and providing a white paper that educated me AND captured my short attention span at the same time.
1 “Doing the Right Things: Using Goal Management to Drive Business Execution.” SuccessFactors, 2012.
Company: mSevenSoftware
Website: mSecure for Android Product Page
Description: mSecure for Android is the leading password manager and digital wallet in the Google Play Store. mSecure has a premium Android look and feel with features like collapsible section headers, search, sort and auto-login assist.
I have been using DataVault ever since it was first released, back when it was just a desktop tool. I stayed loyal to DataVault but decided to switch to mSecure about a month ago. I have always looked for a better password manager, trying and testing just about all the “top” tools on the app market. If you haven’t figured it out already, I’m picky, very picky, and I know what I want, and mSecure had it.
My “Basic” Requirements in a Password Manager:
Entries must have unlimited fields
I like icons, the more the better (thanks mSecure!)
Password required to access the app
Good organization of Categories with options for different views
Customization, customization, customization
Ability to sync with Dropbox and similar services
mSecure is hands down my new password manager. The only thing I would add to a wish list for the tool is the ability to import from DataVault. Besides, I’ve drunk the Kool-Aid, and it is good!
Picture the scenario: You’re the BA on an implementation project. Everything is going well and milestones are being met. All stakeholders are happy.
Then, all of a sudden, a stakeholder tells you that the system just HAS to do something that was never planned for in the requirements. It just HAS to have a field to track a customer’s credit status. It HAS to have a maker-checker approval workflow for modifying transactions. It HAS to have the ability to attach documents to a customer’s record.
The list goes on… you’re flooded with numerous change requests and you realize you have a serious scope creep problem. Scope creep is one of the most common issues in projects, and to be honest, I did not realize at first how detrimental it was. It distracts everyone from delivering the base outcomes of a project, takes away hours and hours of our time as BAs, and drives your PM and Program Director insane.
I am breaking this post into two parts to open up discussion on my strategies for controlling scope creep. My point of view is that of a business analyst, but you can apply these ideas to any sort of project, regardless of your role.
Over-Communicate to Users
One of THE most effective ways to control scope creep is over-communication with your stakeholders and users. I’ve found that when people are continuously involved and kept up-to-date on the status of a project, they will be more amenable to ideas and will provide necessary buy-in when you need it.
Just picture yourself entrenched in a project where you’ve been updating a user every two days on the project’s status, the issues you are facing, and the progression of the timeline. If, out of the blue, that user requests or requires an unexpected “enhancement item,” it is going to be a great deal easier for you to negotiate, for example, a rescheduling of said enhancement.
Compare this scenario to a situation in which you have not communicated with the user, at all, during the project. That user, mark my words, is going to be unyielding when pushing their enhancement through.
Documentation, Documentation, Documentation
Document EVERYTHING to help control scope creep. On any project, there are countless discussions and agreements that occur outside of official meetings.
NEVER agree to anything outside of an official meeting. Always minute and document agreements and decision points through a formal meeting. That way, if a user disputes that something was agreed and signed off, you can refer to the meeting minutes as evidence.
People forget things, so you have to have documentary evidence of what was agreed on.