All Posts

Ugh. That's a pretty example of ugly data. Technically, your data is a JSON structure with a field containing a string. That string describes another JSON structure, but from Splunk's point of view it's just a string, which makes it inconvenient and possibly inefficient to manipulate. It would be much better if your source delivered this in a saner format.
I don't think you can edit this page. But you can set your own page with the ssoAuthFailureRedirect option, so your users will be redirected to a webpage of your choice in case of an SSO authentication failure.
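A minimal sketch of what that could look like in web.conf, assuming the target URL is a placeholder you replace with your own page:

[settings]
# Hypothetical example: send users to a custom help page
# when SSO/SAML authentication fails.
ssoAuthFailureRedirect = https://intranet.example.com/splunk-sso-help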
Dear Splunkers... While checking on the fishbucket in the Splexicon at https://docs.splunk.com/Splexicon:Fishbucket, I noticed the page has a link - "See the detailed Splunk blog topic" - but that blog link is broken. (PS - on Splunk docs pages there is a comment input box at the bottom for feedback, but Splexicon pages have no feedback input box!) Many of us are aware that wiki.splunk links are broken too. Shouldn't Splunk do something about these broken links? Shouldn't Splunk do some splunking on its own? Suggestions please. Have a great weekend, best regards, Sekar
There are at least two separate apps for "integration" with ES (I haven't used either, so I can't help much in terms of reviewing them). But the question (not necessarily for answering here, just food for thought) is what you really want to do, because in terms of a high-level overview you have two options:
1. Simply pull the data from ES, ingest it into Splunk, and work with it as with any other Splunk-indexed data. This has two drawbacks: you're getting data already pre-processed by ES, which might be in a completely different format than Splunk's native add-ons for your source types would expect, and of course you're wasting resources (most notably storage).
2. Search the data on your ES cluster and only do "post-processing" in Splunk. While this might work (I suppose those apps on Splunkbase aim at it), you're not using Splunk's abilities to the fullest - most importantly, you're not using Splunk's map-reduce processing to split the workload and parallelize it where possible.
So while it might be possible with one or both of those apps, just as you can query a SQL database using dbconnect, it is probably not something I'd do on big datasets.
Hello everyone, I hope you’re doing well. I need assistance with integrating Splunk with Elasticsearch. My goal is to pull data from Elasticsearch and send it to Splunk for analysis. I have a few questions on how to achieve this effectively:
1. Integration Methods: Are there recommended methods for integrating Splunk with Elasticsearch?
2. Tools and Add-ons: What tools or add-ons can be used to facilitate this integration?
3. Setup and Configuration: Are there specific steps or guidelines to follow for setting up this integration correctly?
4. Examples and Guidance: Could you provide any examples or guidance on how to configure Splunk to pull data from Elasticsearch?
Any help or useful resources would be greatly appreciated. Thank you in advance for your time and assistance!
Hi! I am facing the same issue: I'm getting Windows and Sysmon logs but not getting any Linux or Zeek logs. I'm using the inputs.conf below, with all settings following the documentation; the credential package installed successfully as well. I also installed the Zeek apps. Sorry, I forgot to mention that I do see the hosts when I search index=_internal over the last 24 hours. Any help please?

[default]
host = zeek-VirtualBox

[monitor:///var/log/messages]
disabled = 0
index = unix

[monitor:///var/log/syslog]
disabled = 0
index = unix

[monitor:///var/log/faillog]
disabled = 0
index = unix

[monitor:///var/log/auth.log]
disabled = 0
index = unix

[monitor:///opt/zeek/log/current]
disabled = 0
_TCP_ROUTING = *
index = zeek
sourcetype = bro:json
whitelist = \.log$
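One hedged way to check whether those files are even being tailed (assuming the host name above) is to look at the tailing processor's own logs, for example:

index=_internal host=zeek-VirtualBox source=*splunkd.log* component=TailingProcessor

If the monitor stanzas are being read, you should see references to the watched paths there; permission errors on /opt/zeek/log/current would also show up in splunkd.log.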
From your raw event you could do this:

| spath
| rex field=statusMessage "\[(?<ds_message>[^\]]+)"
| spath input=ds_message
| stats count by errorDetail

If you have already extracted statusMessage when the event was ingested, you can skip the first spath command.
Hello @ITWhisperer Thank you for your response. Here is the raw data:

{
  "messageType": "Data",
  "status": "Error",
  "statusMessage": "invalid message fields, wrong message from ds:[{\"threeDSServerTransID\":\"123\",\"messageType\":\"Erro\",\"messageVersion\":\"2.2.0\",\"acsTransID\":\"123\",\"dsTransID\":\"123\",\"errorCode\":\"305\",\"errorComponent\":\"A\",\"errorDescription\":\"Transaction data not valid\",\"errorDetail\":\"No issuer found\",\"errorMessageType\":\"AReq\"}]; type[Erro] code[101] component[SERVER]"
}
Response Code: 401
Response text:
<?xml version="1.0" encoding="UTF-8"?>
<response>
<messages>
<msg type="WARN">call not properly authenticated</msg>
</messages>
</response>

I am using a Splunk bearer token in my Python program via the REST API, but suddenly I started getting this error. I also have another, very similar program that uses a Splunk token, and it works fine without getting this error. I have already tested the token and it gets 200 responses. I don't know what is happening.
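One hedged way to isolate the problem is to test the token directly against the management port (the hostname below is a placeholder):

curl -k -H "Authorization: Bearer <your-token>" https://splunk.example.com:8089/services/server/info

If that returns 200 but your program gets 401, compare the exact Authorization header your code sends (the "Bearer " prefix, stray whitespace, or a truncated token are common culprits).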
This looks like JSON format data - if so, you should be extracting it as JSON and using the JSON functions to manipulate the data. Please share your full event in raw format in a code block, anonymising your data as appropriate. This will enable volunteers to better guide you on a way forward.
Hello. I have a lot of events. Each event contains a similar string: \"errorDetail\":\"possible_value\". Please specify how to create a new field \"errorDetail\" and stats all its possible values. (There are more than 50 kinds of errorDetail.) For example:
\"errorDetail\":\"acctNumber\"
\"errorDetail\":\"Message Version higher\"
\"errorDetail\":\"email\"
Thank you.
Hi @jaibalaraman, Here's a static example that uses separate elements to display a Sankey-like bar: { "visualizations": { "viz_GGlMQrhz": { "type": "splunk.rectangle", "options": { "fillColor": "#5a4575", "strokeColor": "#5a4575" } }, "viz_sdLspBWZ": { "type": "splunk.rectangle", "options": { "fillColor": "#5a4575", "strokeColor": "#5a4575", "fillOpacity": 0.5, "strokeOpacity": 0.5 } }, "viz_G2e5COXh": { "type": "splunk.rectangle", "options": { "fillColor": "#0877a6", "strokeColor": "#0877a6" } }, "viz_izmTEXa4": { "type": "splunk.singlevalue", "options": { "backgroundColor": "transparent", "majorFontSize": 20 }, "dataSources": { "primary": "ds_zydmsUyG" } }, "viz_OBDGe1i4": { "type": "splunk.markdown", "options": { "markdown": "****Account Temporarily Locked Out (403120)****", "fontSize": "custom", "customFontSize": 20 } } }, "dataSources": { "ds_zydmsUyG": { "type": "ds.search", "options": { "query": "| stats count\n| eval count=123", "queryParameters": { "earliest": "0", "latest": "" } }, "name": "Search_1" } }, "defaults": { "dataSources": { "ds.search": { "options": { "queryParameters": { "latest": "$global_time.latest$", "earliest": "$global_time.earliest$" } } } } }, "inputs": { "input_global_trp": { "type": "input.timerange", "options": { "token": "global_time", "defaultValue": "-24h@h,now" }, "title": "Global Time Range" } }, "layout": { "type": "absolute", "options": { "width": 1440, "height": 960, "display": "auto" }, "structure": [ { "item": "viz_GGlMQrhz", "type": "block", "position": { "x": 0, "y": 0, "w": 20, "h": 70 } }, { "item": "viz_sdLspBWZ", "type": "block", "position": { "x": 20, "y": 0, "w": 510, "h": 70 } }, { "item": "viz_G2e5COXh", "type": "block", "position": { "x": 530, "y": 0, "w": 20, "h": 70 } }, { "item": "viz_izmTEXa4", "type": "block", "position": { "x": 430, "y": 0, "w": 100, "h": 70 } }, { "item": "viz_OBDGe1i4", "type": "block", "position": { "x": 30, "y": 20, "w": 400, "h": 30 } } ], "globalInputs": [ "input_global_trp" ] }, "description": "", "title": "Sankey-like" }  
Both status values have the same cause, but different behaviors.  A deferred search will be skipped if it cannot run within the schedule window.  A continued search will run at the next opportunity.
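As a hedged illustration of where the schedule window comes from (the stanza name and values below are hypothetical), in savedsearches.conf:

[my_scheduled_search]
cron_schedule = */5 * * * *
# Allow the scheduler to delay this search by up to 10 minutes;
# only if it still cannot run within that window is it skipped.
schedule_window = 10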
Where do users get this message with a "Return to Splunk" button? It appears whenever the user does not have access to the Splunk login. Is it possible to configure this in SAML? @PickleRick
The time format actually seems to match your event. But the question is whether the event itself contains the right information. You'd have to check the source system's configuration for that.
Here I am taking the TIME_FORMAT in props.conf from the eventtime field present in the raw data (Toronto's time zone is EST, UTC-5:00). Are there any changes I need to make here?
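A minimal sketch of the relevant props.conf settings, assuming the timestamp lives in an eventtime JSON field (the sourcetype name, TIME_PREFIX regex, and TIME_FORMAT below are assumptions; adjust them to your actual data):

[your:sourcetype]
TIME_PREFIX = "eventtime":\s*"
TIME_FORMAT = %Y-%m-%d %H:%M:%S
# Prefer the Olson name over a fixed offset so DST is handled:
# Toronto is UTC-5 only in winter and UTC-4 in summer.
TZ = America/Toronto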
Hi @richgalloway Thank you for your response. If a continued search could not be scheduled/started, then in what sense is it "continued"? Also, a deferred search could likewise not be scheduled - are the two not the same?
The source itself might simply be misconfigured and using the wrong timezone. If it's using time sync of some kind, this shouldn't happen when the time is reported in UTC, but if the time was set manually using the wrong timezone, it will be reported as a wrong timestamp.
Rather than a picture, please provide your sample data as raw text in a code block (anonymised/obfuscated as necessary) to aid volunteers in designing a solution for you. Also, I assume from your description that this is all ingested as a single event. Does the event have a timestamp or does it take the ingestion time? Having extracted the fields, how do you want them displayed/reported on, e.g. all still in a single event, or separate events for each location (and _time)?