All Posts

Nope -  
Hello @vishenps, First, upgrade the Add-on Builder version. Then move all configurations from the local directory to the default directory within the custom app, and remove the local directory. Finally, proceed with the vetting process. Let me know how it goes. Please accept the solution and hit Karma if this helps!
Yes, I realized that it's not "timestamp"; it changed to "eventTimestamp" in the raw data. However, I modified the query and it's still not working:
======================================================================
index=* namespace="dk1017-j" sourcetype="kube:container:kafka-clickhouse-snapshot-writer" message="*Snapshot event published*" AND message="*dbI-LDN*" AND message="*2024-04-03*" AND message="*"
| fields message
| rex field=_raw "\s+date=(?<BusDate>\d{4}-\d{2}-\d{2})"
| rex field=_raw "sourceSystem=(?<Source>[^,]*)"
| rex field=_raw "entityType=(?<Entity>\w+)"
| rex field=_raw "\"eventTimestamp\":\"(?<Time>\d{4}-\d{2}-\d{2}[T]\d{2}:\d{2})"  --> Please suggest here
| sort Time desc
| dedup Entity
| table Source, BusDate, Entity, Time
======================================================================
Attaching a sample raw screenshot for your reference.
Hi Rich, I am sorry for the poorly worded question. "You have an alert whose cron schedule says to fire at 1 PM (13:00) CDT. That's 11:30 PM (23:30) IST." The issue is that instead of receiving the mail at 11:30 PM (23:30) IST, I receive it at 11:30 AM IST. If you check the mail screenshot, you can see the inline query result returned Wed Apr 3 13:00, but the trigger time is April 4, 01:19 AM CST, and the mail reached my inbox on April 4 at 11:49 AM IST. Shouldn't it actually be April 3 13:19 CST and 23:49 IST?
@cmezao - Upgrade Readiness App warnings nowadays seem to be generated from the internal app as well. I personally feel it's safe to ignore them.
Please check the sample raw data; I need the time only.
As of version 7.4.1, your org cert must be appended to the path below: $SPLUNK_HOME/etc/apps/Splunk_TA_aws/lib/certifi/cacert.pem
The thread you're responding to is relatively old and is not directly related to your question. To keep Answers tidy and focused, and to ensure visibility of your issue, please submit your question(s) as a new thread.
If your Splunk version is 9.2 or above and running on Linux, you could try the below: https://docs.splunk.com/Documentation/Splunk/9.2.1/Admin/Serverconf
Your command says "\"timestamp\":\"(?<Time>\d{4}-\d{2}-\d{2}[T]\d{2}:\d{2})", so it will match only if part of your event contains (the timestamp here is just an example) "timestamp":"2023-01-12T14:54. Since your event is formatted differently (most significantly, the "field" you're extracting from is not named "timestamp"), you need to adjust this regex. Use https://regex101.com for checking/verifying your ideas. As a side note, manipulating structured data (in your case, JSON) with regexes might not be the best idea.
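To illustrate that side note, a sketch of the structured-data approach (a hypothetical example, assuming the event is valid JSON in _raw with an eventTimestamp key; adjust the sourcetype and path to your actual payload):

```
index=* sourcetype="kube:container:kafka-clickhouse-snapshot-writer"
| spath path=eventTimestamp output=Time
| table Time
```

spath parses the JSON structure directly, so it keeps working even when field order or whitespace in the event changes.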
Also, please check the query below, which is working; however, it does not give me the required output. I need only the time in the last column.
===============================================================
index=* namespace="dk1017-j" sourcetype="kube:container:kafka-clickhouse-snapshot-writer" message="*Snapshot event published*" AND message="*dbI-LDN*" AND message="*2024-04-03*" AND message="*"
| fields message
| rex field=_raw "\s+date=(?<BusDate>\d{4}-\d{2}-\d{2})"
| rex field=_raw "sourceSystem=(?<Source>[^,]*)"
| rex field=_raw "entityType=(?<Entity>\w+)"
| rex field=_raw "eventTimestamp=(?<Time>\d{4}-\d{2}-\d{2}[T]\d{2}:\d{2})"  --> Need only time
| sort Time desc
| dedup Entity
| table Source, BusDate, Entity, Time
================================================================
Please check the screenshot for a clearer understanding.
Ahhh... This is a Splunk-specific class. I thought this was supposed to be some generic HTTP POST-based mechanism. OK, in this case it might indeed be inserting the proper REST endpoint on its own. Anyway, I'd try debugging by launching tcpdump/wireshark and verifying whether there is any connectivity between your app and your HEC input (and if there is, what is going on there). You're using unencrypted HTTP, so you should see the traffic.
I'm unable to understand the solution; could you please elaborate more? I see the raw data as below: eventTimestamp=2024-04-04T02:24:52.762129638) I would like to extract the time from the above, like 02:24.
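A minimal sketch of one way to do that with rex (assuming eventTimestamp= appears literally in _raw, as in the sample above; the capture group grabs only the HH:MM portion after the T):

```
| rex field=_raw "eventTimestamp=\d{4}-\d{2}-\d{2}T(?<Time>\d{2}:\d{2})"
| table Time
```

Against eventTimestamp=2024-04-04T02:24:52.762129638 this would extract Time=02:24.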
Can anyone explain if the following issues could be interconnected?
- Storage limit: Splunk’s storage is nearing its limit. Could this be affecting the performance or functionality of other components?
- Permission error: An error message indicates that the “Splunk_SA_CIM” app either does not exist or lacks sufficient permissions. Could this be causing issues with data access or processing?
- Transparent Huge Pages (THP) status: THP is not disabled. It’s known that THP can interfere with Splunk’s memory management. Could this be contributing to the problems?
- Memory and ulimit: Could memory constraints or ulimit settings be causing errors?
- Remote search process failure: There was a failure in the remote search process on a peer, leading to potentially incomplete search results. The search process on the peer (affected indexer) ended prematurely. The error message suggests that the application “Splunk_SA_CIM” does not exist. Could this be related to the aforementioned “Splunk_SA_CIM” error?
Could these issues be interconnected, and if so, how? Could resolving one issue potentially alleviate the others?
Hi @bhaskar5428, Your rex command seems to be trying to extract the Time field from the @timestamp field. Can you please show the raw data by clicking the "Show as raw text" selection under the raw event? Splunk shows JSON events as formatted, but rex works on the real text itself. We cannot compare your regex and raw data using this screen capture.
I am planning to give a basic Splunk session to my team. Can you help me find a cheat sheet available online that I can download easily?
===========================================
Query used:
index=* namespace="dk1017-j" sourcetype="kube:container:kafka-clickhouse-snapshot-writer" message="*Snapshot event published*" AND message="*dbI-LDN*" AND message="*2024-04-03*" AND message="*"
| fields message
| rex field=_raw "\s+date=(?<BusDate>\d{4}-\d{2}-\d{2})"
| rex field=_raw "sourceSystem=(?<Source>[^,]*)"
| rex field=_raw "entityType=(?<Entity>\w+)"
| rex field=_raw "\"timestamp\":\"(?<Time>\d{4}-\d{2}-\d{2}[T]\d{2}:\d{2})"  -- this is not working
| sort Time desc
| dedup Entity
| table Source, BusDate, Entity, Time
===========================================
This is how the raw data looks. I would like to extract only the time; please also suggest how I can convert it to AM/PM. Kindly provide a solution.
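For the AM/PM part, one possible sketch (assuming the eventTimestamp=... key=value form shown elsewhere in the thread; strptime parses the captured text into epoch time, and strftime's %I and %p render it as 12-hour time with AM/PM):

```
| rex field=_raw "eventTimestamp=(?<TimeRaw>\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2})"
| eval Time=strftime(strptime(TimeRaw, "%Y-%m-%dT%H:%M:%S"), "%I:%M %p")
```

For example, 2024-04-04T14:24:52 would render as 02:24 PM.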
Hi @viktoriiants, How about something like this:
index=_internal
| eval dayOfWeek=strftime(_time, "%A"), date=strftime(_time, "%Y-%m-%d")
| eval dayNum=tonumber(strftime(_time,"%w")) + 1 ``` 1=Sunday, ..., 7=Saturday```
| stats count as "Session count" by dayOfWeek, date
| addtotals col=t row=f
| eval sort = if(isnull(date),1,0)
| sort - sort + date
| fields - sort
Here we create a new temporary field to sort on, setting it to 1 for the total row and 0 for all other rows. Then we sort by this column and the date column. Finally, we remove the "sort" column.
This issue was resolved by increasing the MetaSpace value to 256MB from the default of 64MB.
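In case it helps others: assuming "MetaSpace" here refers to the JVM metaspace limit (as used by Java-based Splunk components such as the DB Connect task server; the post doesn't say which component, so this is an assumption), the change corresponds to a JVM option like the following, set wherever your deployment configures Java options:

```
-XX:MaxMetaspaceSize=256m
```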