Hi all, I am new to Splunk and I get the following error when I try to run a timechart command: "Field '_time' should have numerical values". I have a CSV file, 'try.csv', from which I read some fields to display, but when I run a timechart command I get the above error. The file has a column named _time which contains ISO8601 timestamps. I would appreciate any guidance or help, as I am relatively new to Splunk. Thanks
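In case it helps: timechart needs _time to be epoch seconds, and a _time column read from a CSV arrives as a plain string. A minimal sketch, assuming the file is available as a lookup and the timestamps look like 2024-05-09T07:03:44+0000 (adjust the strptime format string to match your actual column):

```
| inputlookup try.csv
| eval _time=strptime(_time, "%Y-%m-%dT%H:%M:%S%z")
| timechart span=1h count
```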
If you've done that, then the best course of action might be to log a support ticket, as there could be another underlying issue.
Hello everyone, I'm currently working on a dashboard to visualize database latency across various machines, and I'm encountering an issue with the line chart's SPL (Search Processing Language). The requirement is to retrieve all values of the field ms_per_block grouped by ds_file_path and machine. Here's my SPL:

index=development sourcetype=custom_function user_action=database_test ds_file=* | eval ds_file_path=ds_path."\\".ds_file | search ds_file_path="\\\\swmfs\\orca_db_january_2024\\type\\rwo.ds" | chart values(ms_per_block) by ds_file_path machine

My goal is to have each ds_file_path value listed in an individual row, with each machine as a separate field containing its respective ms_per_block values. I've tried using the table command:

| table ds_file_path, machine, ms_per_block

but this doesn't give me the desired output: the machine name appears as a value of a single field, whereas I need each machine name to be a separate field. I feel like I'm missing something here. Any guidance on how to achieve this would be greatly appreciated. Thanks in advance!
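For reference, the usual way to get one row per ds_file_path with one column per machine is chart's over ... by ... form rather than listing both fields after by. A sketch based on the search above:

```
index=development sourcetype=custom_function user_action=database_test ds_file=*
| eval ds_file_path=ds_path."\\".ds_file
| chart values(ms_per_block) over ds_file_path by machine
```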
Hello, thanks for your response. I have added the necessary configuration according to the article you shared, but we are still facing this issue. The UI loading is slow as well.
Hello Splunk Community, I am trying to extract "timestamp":"1715235824441" and display it as a proper date/time. Could anyone help me with this? Thanks in advance. Regards, Sahitya
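For reference, a value like 1715235824441 looks like epoch milliseconds, so one common approach is to extract it and divide by 1000 before formatting. A sketch; the field names ts_ms and readable_time are assumptions:

```
| rex "\"timestamp\":\"(?<ts_ms>\d+)\""
| eval readable_time=strftime(ts_ms/1000, "%Y-%m-%d %H:%M:%S")
```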
We particularly need to know how many H statuses changed to C within the day (12:00 AM to 11:59 PM).
Start by checking the logs. These log levels can also be set via the GUI:

$SPLUNK_HOME/bin/splunk set log-level HTTPServer -level DEBUG
$SPLUNK_HOME/bin/splunk set log-level HttpInputDataHandler -level DEBUG
$SPLUNK_HOME/bin/splunk set log-level TcpInputProc -level DEBUG

Remember to set them back to WARN once you have finished debugging. Then search; this may give you some clues to investigate further:

index=_internal source=*splunkd.log* (component=HttpInputDataHandler OR component=TcpInputProc OR component=HTTPServer)
Try it this way around: index=abc | mvexpand records{} | spath input=records{} | table ProcessName, message, severity, Username, Email, Id
The suggestions made by @PickleRick are probably the best to go with. If it is still not working, you will most likely need to adjust the regex pattern based on your logs.
I've never come across a Splunk environment that uses dynamic IPs for indexers (that might be asking for trouble), but there may be some use cases, probably in cloud environments. Normally one would use static IPs and DNS names for UF-to-indexer communications. You would then configure your outputs.conf with those DNS names. The UFs have built-in functionality to spread portions of the data across the indexers, so DNS may be the way forward for you.

Example outputs.conf:

[tcpout]
defaultGroup = my_indexers

[tcpout:my_indexers]
server = indexer1.example.com:9997, indexer2.example.com:9997

If you are using an indexer cluster, you can use the Cluster Master Discovery option; read all about it here:
https://docs.splunk.com/Documentation/Splunk/9.2.1/Indexer/indexerdiscovery

Regarding SmartStore, read all about it here:
https://docs.splunk.com/Documentation/Splunk/9.2.1/Indexer/AboutSmartStore
I have the following event, which contains an array of records:

ProcessName: TestFlow270
message: TestMessage1
records: [
  {"Username": "138perf_test1@netgear.com.org", "Email": "tmckinnon@netgear.com.invalid", "Id": "00530000000drllAAA"}
  {"Username": "clau(smtest145)@netgear.com.org", "Email": "clau@netgear.com.invalid", "Id": "0050M00000DtmxIQAR"}
  {"Username": "d.mitra@netgear.com.test1", "Email": "d.mitratest1@netgear.com", "Id": "0052g000003DSbTAAW"}
  {"Username": "demoalias+test1@guest.netgear.com.org", "Email": "demoalias+test1@gmail.com.invalid", "Id": "0050M00000CyZJYQA3"}
  {"Username": "dlohith+eventstest1@netgear.com.org", "Email": "sfdcapp_gacks@netgear.com.invalid", "Id": "0050M00000CzJvYQAV"}
  {"Username": "juan.gimenez+test1@netgear.com.apsqa2", "Email": "juan.gimenez+test1@netgear.com", "Id": "005D10000043gVxIAI"}
  {"Username": "kulbir.singh+test1@netgear.com.org", "Email": "sfdcapp_gacks@netgear.com.invalid", "Id": "0050M00000CzJvaQAF"}
  {"Username": "rktest1028@guest.netgear.com.org", "Email": "rktest1028@gmail.com.invalid", "Id": "0053y00000G0UmxAAF"}
  {"Username": "test123test2207@test.com", "Email": "kkhatri@netgear.com", "Id": "005D10000042Mi1IAE"}
  {"Username": "test123test@test.com", "Email": "test123test@test.com", "Id": "0052g000003EUIUAA4"}
]
severity: DEBUG

I tried this query:

index=abc | spath input=records{} | mvexpand records{} | table ProcessName, message, severity, Username, Email, as Id

It returns 10 records, but all 10 records have the same values as the first record. Is there a way to parse this array with all the key-value pairs? @gcusello @yuanliu
So why not just count the C's in one day?
Doesn't matter. You can make an app with those settings and deploy it to your Cloud instance.
We are using the Splunk Cloud UI.
If you're not hell-bent on doing it with Ingest Actions, you can just use transforms to filter out all events except the ones you want: https://docs.splunk.com/Documentation/Splunk/latest/Forwarding/Routeandfilterdatad In your case you'd first need a "match-all" transform rerouting all data to nullQueue, and then a transform matching only ERROR/FATAL events, sending those events to indexQueue.
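A minimal sketch of that two-transform pattern; the sourcetype name is hypothetical and the regex assumes the events literally contain :ERROR: or :FATAL: as described:

```
# props.conf
[my_sourcetype]
TRANSFORMS-filter = drop_everything, keep_errors

# transforms.conf
[drop_everything]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

[keep_errors]
REGEX = :(ERROR|FATAL):
DEST_KEY = queue
FORMAT = indexQueue
```

The order matters: transforms run left to right, so the match-all must come first and the keep rule second.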
I tried this but still i am seeing other events being ingested apart from :ERROR: and :FATAL:
This format looks suspiciously familiar. Check if you're using INDEXED_EXTRACTIONS on this sourcetype. If so, the data is parsed on the UF and is not further processed on the indexer (or HF).
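For reference, that setting lives in props.conf on the forwarder; a sketch of what to look for (the sourcetype name is an assumption):

```
# props.conf on the UF - if this stanza is present, the UF parses
# the events itself, and index-time transforms on the indexer/HF
# will not be applied to this data.
[my_sourcetype]
INDEXED_EXTRACTIONS = json
```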
1. Calling on specific people to answer your question is plain rude. This is a volunteer-driven community and people respond to publicly posted questions if and when they want. If you want a response in a timely manner, you have to purchase support/consultancy services from one of the many Splunk partners or from Splunk's own Professional Services. 2. Digging up a thread from 5 years ago is not very likely to produce meaningful results. Create a new thread and describe your problem. If your problem is similar to an old one, you can link to the old thread for reference.
Hi, I am stuck in a similar situation where the following command works. The problem is that when the numerator and the denominator are zero, I get the following error message: "Error in 'EvalCommand": Type checking failed. '"' only takes numbers". I tried it with an if statement, but it still doesn't work. Could you help me with this?
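For reference, the usual guard against a zero denominator in eval is an if (or case) around the division, so the division is never evaluated when the denominator is zero. A sketch; the field names numerator, denominator and ratio are assumptions:

```
| eval ratio=if(denominator==0, 0, numerator/denominator)
```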
The question is a bit imprecise. What do you want to do, precisely? I'd interpret it as "For each day I want a count of accounts that did not appear in the events on any of the previous days". Is that right? Also, how do you treat the first day of such a summary? All accounts from the first day would be shown as new on that day.
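Under that interpretation, one common sketch is to find each account's first appearance and count first appearances per day; the index and field names here are assumptions:

```
index=my_index
| stats earliest(_time) as first_seen by account
| bin first_seen span=1d
| stats count as new_accounts by first_seen
```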