All Posts


Start by checking the logs. These log levels can also be set via the GUI.

$SPLUNK_HOME/bin/splunk set log-level HTTPServer -level DEBUG
$SPLUNK_HOME/bin/splunk set log-level HttpInputDataHandler -level DEBUG
$SPLUNK_HOME/bin/splunk set log-level TcpInputProc -level DEBUG

Remember to set these back to WARN once you have finished debugging. Then run this search - it may give you some clues to investigate further:

index=_internal source=*splunkd.log* (component=HttpInputDataHandler OR component=TcpInputProc OR component=HTTPServer)
Try it this way around:

index=abc | mvexpand records{} | spath input=records{} | table ProcessName, message, severity, Username, Email, Id
The suggestions made by @PickleRick are probably the best to go with. If it is still not working, you will most likely need to adjust the regex pattern based on your logs.
I've never come across a Splunk environment that uses dynamic IPs for indexers (it might be asking for trouble), but there may be some use cases, probably cloud environments. Normally one would use static IPs and DNS names for UF-to-indexer communications. You would then configure your outputs.conf with those DNS names. The UFs have built-in functionality to spray portions of the data across the indexers, so DNS may be the way forward for you.

Example outputs.conf:

[tcpout]
defaultGroup = my_indexers

[tcpout:my_indexers]
server = indexer1.example.com:9997, indexer2.example.com:9997

If you are using an indexer cluster, you can use the indexer discovery option - read all about it here: https://docs.splunk.com/Documentation/Splunk/9.2.1/Indexer/indexerdiscovery

Regarding SmartStore, read all about it here: https://docs.splunk.com/Documentation/Splunk/9.2.1/Indexer/AboutSmartStore
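For the indexer discovery option mentioned above, the forwarder-side outputs.conf would look roughly like this (a sketch only; the cluster manager hostname and the pass4SymmKey value are placeholders, and the key must match the one configured on the cluster manager):

```
# outputs.conf on the universal forwarder
[indexer_discovery:cluster1]
# placeholder hostname - point at your cluster manager's management port
master_uri = https://clustermanager.example.com:8089
pass4SymmKey = changeme

[tcpout:cluster_group]
# ask the cluster manager for the current peer list instead of hardcoding it
indexerDiscovery = cluster1

[tcpout]
defaultGroup = cluster_group
```

With this in place, peers that join or leave the cluster are picked up automatically, so dynamic indexer addresses stop being a forwarder-side problem.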
I have the following event which contains an array of records:

ProcessName: TestFlow270
message: TestMessage1
records: [
  {"Username": "138perf_test1@netgear.com.org", "Email": "tmckinnon@netgear.com.invalid", "Id": "00530000000drllAAA"}
  {"Username": "clau(smtest145)@netgear.com.org", "Email": "clau@netgear.com.invalid", "Id": "0050M00000DtmxIQAR"}
  {"Username": "d.mitra@netgear.com.test1", "Email": "d.mitratest1@netgear.com", "Id": "0052g000003DSbTAAW"}
  {"Username": "demoalias+test1@guest.netgear.com.org", "Email": "demoalias+test1@gmail.com.invalid", "Id": "0050M00000CyZJYQA3"}
  {"Username": "dlohith+eventstest1@netgear.com.org", "Email": "sfdcapp_gacks@netgear.com.invalid", "Id": "0050M00000CzJvYQAV"}
  {"Username": "juan.gimenez+test1@netgear.com.apsqa2", "Email": "juan.gimenez+test1@netgear.com", "Id": "005D10000043gVxIAI"}
  {"Username": "kulbir.singh+test1@netgear.com.org", "Email": "sfdcapp_gacks@netgear.com.invalid", "Id": "0050M00000CzJvaQAF"}
  {"Username": "rktest1028@guest.netgear.com.org", "Email": "rktest1028@gmail.com.invalid", "Id": "0053y00000G0UmxAAF"}
  {"Username": "test123test2207@test.com", "Email": "kkhatri@netgear.com", "Id": "005D10000042Mi1IAE"}
  {"Username": "test123test@test.com", "Email": "test123test@test.com", "Id": "0052g000003EUIUAA4"}
]
severity: DEBUG

I tried this query:

index=abc | spath input=records{} | mvexpand records{} | table ProcessName, message, severity, Username, Email, as Id

It returns 10 records, but all 10 records have the same value, I mean the first record. Is there a way to parse this array with all the key-value pairs? @gcusello @yuanliu
So why not just count the C's in one day?
Doesn't matter. You can make an app with those settings and deploy it to your Cloud instance.
We are using the Splunk Cloud UI.
If you're not hell-bent on doing it with Ingest Actions, you can just use transforms to filter out all events except the ones you want: https://docs.splunk.com/Documentation/Splunk/latest/Forwarding/Routeandfilterdatad In your case you'd first need a "match-all" transform rerouting all data to nullQueue, and then a transform matching only ERROR/FATAL events that sends those events to indexQueue.
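A minimal sketch of that two-transform setup, assuming a hypothetical sourcetype name my_sourcetype (transforms run in the listed order, and the last one to match an event decides its queue, so the discard rule must come first):

```
# props.conf
[my_sourcetype]
TRANSFORMS-filter = drop_everything, keep_errors

# transforms.conf
[drop_everything]
# match every event and send it to the null queue (discard)
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

[keep_errors]
# pull ERROR/FATAL events back out of the discard path
REGEX = :ERROR:|:FATAL:
DEST_KEY = queue
FORMAT = indexQueue
```

This has to be placed on the first "heavy" component that parses the data (indexer or heavy forwarder), not on a universal forwarder.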
I tried this, but I am still seeing other events being ingested apart from :ERROR: and :FATAL:.
This format looks suspiciously familiar. Check if you're using INDEXED_EXTRACTIONS on this sourcetype. If you do, the data is parsed on the UF and is not further processed on the indexer (or HF).
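One way to check for that setting (a sketch, assuming shell access to the forwarder and a hypothetical sourcetype name my_sourcetype):

```
$SPLUNK_HOME/bin/splunk btool props list my_sourcetype --debug | grep -i INDEXED_EXTRACTIONS
```

The --debug flag also shows which .conf file the setting comes from, which helps when several apps define the same sourcetype.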
1. Calling on specific people to answer your question is plain rude. This is a volunteer-driven community and people respond to publicly posted questions if and when they want. If you want a response in a timely manner, you have to purchase support/consultancy services from one of many Splunk partners or from Splunk's own Professional Services. 2. Digging up a thread from 5 years ago is not very likely to produce meaningful results. Create a new thread and describe your problem. If your problem is similar to an old one, you can link to the old thread for reference.
Hi, I am stuck in a similar situation where the following command works. The problem is that when the numerator and the denominator are zero, I get the following error message: "Error in 'EvalCommand': Type checking failed. '"' only takes numbers". I tried it through an if statement but it still doesn't work. Could you help me with this?
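A common pattern for guarding a division in eval (a sketch with hypothetical field names numerator and denominator; this type-checking error usually means an operand is a string or null, so convert with tonumber() and guard the zero/null case before dividing):

```
| eval num = tonumber(numerator), den = tonumber(denominator)
| eval ratio = if(isnull(num) OR isnull(den) OR den == 0, 0, num / den)
```

Whether the zero/null case should yield 0, null(), or be filtered out depends on what the ratio means in your report.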
The question is a bit imprecise. What exactly do you want to do? I'd interpret it as "for each day, I want a count of accounts not already appearing in the events of any previous day". Is that right? Also, how do you treat the first day of such a summary? All accounts from the first day would show up as new on that day.
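If that interpretation is right, one way to sketch it (assuming hypothetical index and field names your_index and account) is to anchor each account to the day it first appears and then count by that day:

```
index=your_index
| stats earliest(_time) as first_seen by account
| eval day = strftime(first_seen, "%Y-%m-%d")
| stats count as new_accounts by day
| sort day
```

Note this still counts every first-day account as "new" on day one, which is the edge case raised above.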
With the new version there are a number of changes; have a look through this doc. In short, you need to ensure a number of new indexes (_ds*) are in place (see the doc), there is a new setting in the outputs.conf of the deployment server, and you need to add the new indexes to the UFs' index whitelist. If it still fails after trying these, log a support call. https://docs.splunk.com/Documentation/Splunk/9.2.0/Updating/Upgradepre-9.2deploymentservers
Hi. You might then be able to apply a regex pattern that matches only events that do NOT contain ERROR or FATAL, so you can discard those and keep the rest. Try this:

^(?!.*(ERROR|FATAL)).*$
Thank you @gcusello, that worked
Hello! I have recently upgraded my Splunk Enterprise servers from 9.1.2 to 9.2.1 and noticed the following web behaviors on the deployment server:
1. When searching for a hostname, it takes a long time to load.
2. The server class and apps for (any) host are not reflected correctly. This was cross-checked against serverclass.conf on the CLI.
Wondering if anyone has faced this issue and whether it is a GUI bug.
Hi @blbr123 , test it in Splunk using the regex command. Ciao. Giuseppe
Thanks for the reply. But I forgot to mention that both are using different indexes, so I am not able to use a base search here.