All Posts

@cmezao - I feel the Upgrade Readiness App warnings nowadays are generated for the internal apps as well. Personally, I feel it's safe to ignore them.
Please check the sample raw data; I need the time only.
As of version 7.4.1, your org cert must be appended to the following file: $SPLUNK_HOME/etc/apps/Splunk_TA_aws/lib/certifi/cacert.pem
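A minimal sketch of that append, assuming the org cert is saved as org_cert.pem (a hypothetical filename):

# Back up the bundled CA file first, then append the org cert to it
cp $SPLUNK_HOME/etc/apps/Splunk_TA_aws/lib/certifi/cacert.pem $SPLUNK_HOME/etc/apps/Splunk_TA_aws/lib/certifi/cacert.pem.bak
cat org_cert.pem >> $SPLUNK_HOME/etc/apps/Splunk_TA_aws/lib/certifi/cacert.pem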
The thread you're responding to is relatively old and is not directly related to your question. To keep Answers tidy and focused, and to ensure visibility of your issue, please submit your question(s) as a new thread.
If your Splunk version is 9.2 or above and running on Linux, you could try the settings documented here: https://docs.splunk.com/Documentation/Splunk/9.2.1/Admin/Serverconf
Your command says "\"timestamp\":\"(?<Time>\d{4}-\d{2}-\d{2}[T]\d{2}:\d{2})", so it will match only if a part of your event contains (the timestamp itself is of course just an example) "timestamp":"2023-01-12T14:54. Since your event is formatted differently (most significantly, the "field" you're extracting from is not named "timestamp"), you need to adjust this regex. Use https://regex101.com for checking/verifying your ideas. As a side note, manipulating structured data (in your case, JSON) with regexes might not be the best idea.
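For example, a sketch of an adjusted rex, assuming the key=value eventTimestamp format quoted elsewhere in this thread (capturing only the HH:MM part):

| rex field=_raw "eventTimestamp=\d{4}-\d{2}-\d{2}T(?<Time>\d{2}:\d{2})"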
Also, please check the query below, which is working; however, it does not give me the required output. I need only the time in the last column.
===============================================================
index=* namespace="dk1017-j" sourcetype="kube:container:kafka-clickhouse-snapshot-writer" message="*Snapshot event published*" AND message="*dbI-LDN*" AND message="*2024-04-03*" AND message="*"
| fields message
| rex field=_raw "\s+date=(?<BusDate>\d{4}-\d{2}-\d{2})"
| rex field=_raw "sourceSystem=(?<Source>[^,]*)"
| rex field=_raw "entityType=(?<Entity>\w+)"
| rex field=_raw "eventTimestamp=(?<Time>\d{4}-\d{2}-\d{2}[T]\d{2}:\d{2})"   --> Need only time
| sort Time desc
| dedup Entity
| table Source, BusDate, Entity, Time
================================================================
Please check the screenshot for a clearer understanding.
Ahhh... This is a Splunk-specific class. I thought this was supposed to be some generic HTTP-POST-based mechanism. OK, in this case it might indeed be inserting the proper REST endpoint on its own. Anyway, I'd try debugging by launching tcpdump/wireshark and verifying whether there is any connectivity between your app and your HEC input (and if there is, what is going on there). You use unencrypted HTTP, so you should see the traffic.
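For example, a minimal capture sketch, assuming HEC is listening on its default port 8088 (adjust to your actual input port):

tcpdump -i any -A port 8088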
Unable to understand the solution, could you please elaborate more? I see the following in the raw data: eventTimestamp=2024-04-04T02:24:52.762129638) I would like to extract the time from the above, like: 02:24
Can anyone explain if the following issues could be interconnected?
1. Storage Limit: Splunk's storage is nearing its limit. Could this be affecting the performance or functionality of other components?
2. Permission Error: An error message indicates that the "Splunk_SA_CIM" app either does not exist or lacks sufficient permissions. Could this be causing issues with data access or processing?
3. Transparent Huge Pages (THP) Status: THP is not disabled. It's known that THP can interfere with Splunk's memory management. Could this be contributing to the problems?
4. Memory and Ulimit: Could memory constraints or ulimit settings be causing errors?
5. Remote Search Process Failure: There was a failure in the remote search process on a peer, leading to potentially incomplete search results. The search process on the peer (affected indexer) ended prematurely. The error message suggests that the application "Splunk_SA_CIM" does not exist. Could this be related to the aforementioned "Splunk_SA_CIM" error?
Could these issues be interconnected, and if so, how? Could resolving one issue potentially alleviate the others?
Hi @bhaskar5428, your rex command seems to be trying to extract the Time field from the @timestamp field. Can you please show the raw data by clicking the "Show as raw text" option under the raw event? Splunk shows JSON events formatted, but rex works on the real text itself. We cannot compare your regex and raw data using this screen capture.
I am planning to deliver a basic Splunk session to my team. Could you point me to any cheat sheet available online that I can easily download?
===========================================
Query used:
index=* namespace="dk1017-j" sourcetype="kube:container:kafka-clickhouse-snapshot-writer" message="*Snapshot event published*" AND message="*dbI-LDN*" AND message="*2024-04-03*" AND message="*"
| fields message
| rex field=_raw "\s+date=(?<BusDate>\d{4}-\d{2}-\d{2})"
| rex field=_raw "sourceSystem=(?<Source>[^,]*)"
| rex field=_raw "entityType=(?<Entity>\w+)"
| rex field=_raw "\"timestamp\":\"(?<Time>\d{4}-\d{2}-\d{2}[T]\d{2}:\d{2})"  -- this is not working
| sort Time desc
| dedup Entity
| table Source, BusDate, Entity, Time
===========================================
This is how the raw data looks. I would like to extract only the time; please also suggest how I can convert it to AM/PM. Kindly provide a solution.
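A minimal sketch of that extraction plus an AM/PM conversion, assuming the eventTimestamp=... key=value format shown in the raw data (the strptime/strftime round trip is an illustrative addition, not part of the original query):

| rex field=_raw "eventTimestamp=\d{4}-\d{2}-\d{2}T(?<Time>\d{2}:\d{2})"
| eval Time12h=strftime(strptime(Time, "%H:%M"), "%I:%M %p")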
Hi @viktoriiants, how about something like this:
index=_internal
| eval dayOfWeek=strftime(_time, "%A"), date=strftime(_time, "%Y-%m-%d")
| eval dayNum=tonumber(strftime(_time,"%w")) + 1 ``` 1=Sunday, ..., 7=Saturday ```
| stats count as "Session count" by dayOfWeek, date
| addtotals col=t row=f
| eval sort = if(isnull(date),1,0)
| sort - sort + date
| fields - sort
Here we're creating a new temporary field to sort on, setting it to 1 for our total row and 0 for all other rows. Then we sort by this column and the date column. Finally, we remove the "sort" column.
This issue was resolved by increasing the MetaSpace value to 256MB instead of the default value of 64MB.
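For reference, on a JVM-based component this is typically done with a startup option along the lines below; exactly where it is set depends on your deployment, so treat this as a sketch:

-XX:MaxMetaspaceSize=256m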
Hi, how do I separate out multiple errors instead of "OK": Invalid password, reset password, permission denied, etc.?
index=events event.Properties.errMessage != "Invalid LoginID","Account Temporarily Locked Out","Permission denied","Unauthorized user","Account Pending Verification","Invalid parameter value"
| stats count by event.Properties.errMessage
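A sketch of one way to express that filter, since != does not accept a comma-separated list of values; the field name and values are taken from the post, the rest is an assumption about the intent:

index=events NOT event.Properties.errMessage IN ("Invalid LoginID", "Account Temporarily Locked Out", "Permission denied", "Unauthorized user", "Account Pending Verification", "Invalid parameter value")
| stats count by event.Properties.errMessage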
Figured out the solution:
1. I just use $name$ to get the clicked object line's name.
2. Put latest="+1m".
3. Removed all defined tokens and put the actual values into the search.

"eventHandlers": [
  {
    "type": "drilldown.linkToSearch",
    "options": {
      "enableSmartSources": true,
      "query": "... where match(Object,\"$name$\") and match(Object_Time,\"$value$\")\r\n| ...",
      "earliest": "$row._time.value$",
      "latest": "+1m",
      "type": "custom",
      "newTab": true
    }
  }
]
I've just checked to see how my "subsearch" would run. I've changed my time picker to 30 mins and it hasn't run anything; it's been stuck at 0 of 0 events matched.

The question is: have you manually verified that your data in the new time period actually contains matching hosts? Here is a quick way to confirm that the subsearch method works.

1. Select a couple of hosts that you know match some events in this time period, say host1 and host2, and run this search with the chosen time picker:

| tstats values(source) where index=* (host="host1" OR host="host2") earliest=-30m@h latest=-0m@h by host, index

If this search gives you no output, you need to find another couple of hosts until you get non-zero output. (Hint: it is best to run all tests within time boundaries that you know your tests will not cross. I would suggest using fixed earliest/latest rather than the time picker, for example earliest=-30m@h latest=-0m@h.)

2. Make sure that host1 and host2 exist in mylookup.csv with this search:

| inputlookup mylookup.csv where host IN (host1, host2) | fields host | dedup host | format

As @bowesmana explained, the output should be something like search ( ( host="host1" ) OR ( host="host2" ) ). If your result is different, that means mylookup.csv does not contain host1 and host2, and you then need to redesign/repopulate your lookup table.

3. Run the following combined search:

| tstats values(source) where index=* [| inputlookup mylookup.csv where host IN (host1, host2) | fields host | dedup host] earliest=-30m@h latest=-0m@h by host, index

This search should give you the exact same results as the first one. Then you just remove the where clause in the subsearch.

Also, the second way "lookup" uses the same concept as my previous search; I will most likely run into a "data is too large for serialization" error.

Why do you say @bowesmana's lookup method uses the same concept as your join method? Have you tried it? It is totally different because it doesn't involve join. Are you suggesting that | tstats values(source) where index=* by host, index always gives you that error? What about | tstats values(source) where index=* by host? What about | tstats values(source) where index=*? If these searches give you errors, you may have some fundamental problem in your indexer. No amount of SPL can save the day. Given the chance, however, I would use the subsearch method because it is the fastest.
No, because in the official docs they mention only url = %scheme%://%host%:%port% (https://dev.splunk.com/enterprise/docs/devtools/java/logging-java/howtouseloggingjava/enableloghttpjava). Also, I tried including the HEC REST endpoint, but it's not working in my case.
There are a few questions in regards to Interactions for Dashboard Studio. This is the current code I have:
{
  "drilldown": "true",
  "visualizations": {
    "viz_ar3dmyq9": {
      "type": "splunk.line",
      "dataSources": {
        "primary": "ds_jjp4wUrz"
      },
      "eventHandlers": [
        {
          "type": "drilldown.setToken",
          "options": {
            "tokens": [
              {
                "token": "click_time",
                "key": "row._time.value"
              }
            ]
          }
        },
        {
          "type": "drilldown.linkToSearch",
          "options": {
            "query": "SomeSearchQuery",
            "earliest": "$click_time$",
            "latest": "$global_time.latest$",
            "type": "custom",
            "newTab": true
          }
        }
      ],
      "options": {
        "seriesColorsByField": {},
        "dataValuesDisplay": "minmax",
        "y2AxisTitleText": ""
      }
    },
    "viz_leeY0Yzv": {
      "type": "splunk.markdown",
      "options": {
        "markdown": "**Value of Clicked row : $click_time$**",
        "backgroundColor": "#ffffff",
        "fontFamily": "Times New Roman",
        "fontSize": "extraLarge"
      }
    }
  },
  "dataSources": {
    "ds_uHpCvdwq": {
      "type": "ds.search",
      "options": {
        "enableSmartSources": true,
        "query": "someCustomQuery",
        "refresh": "10s",
        "refreshType": "delay"
      },
      "name": "CustomQuery"
    },
    "ds_jjp4wUrz": {
      "type": "ds.search",
      "options": {
        "query": "AnotherCustomQuery",
        "refresh": "10s",
        "refreshType": "delay"
      },
      "name": "CustomQuery2"
    }
  },
  "defaults": {
    "dataSources": {
      "ds.search": {
        "options": {
          "queryParameters": {
            "latest": "$global_time.latest$",
            "earliest": "$global_time.earliest$"
          }
        }
      }
    }
  },
  "inputs": {
    "input_global_trp": {
      "type": "input.timerange",
      "options": {
        "token": "global_time",
        "defaultValue": "-60m@m,now"
      },
      "title": "Global Time Range"
    }
  },
  "layout": {
    "type": "absolute",
    "options": {
      "width": 1440,
      "height": 960,
      "display": "auto"
    },
    "structure": [
      {
        "item": "viz_ar3dmyq9",
        "type": "block",
        "position": {
          "x": 0,
          "y": 0,
          "w": 1440,
          "h": 330
        }
      },
      {
        "item": "viz_leeY0Yzv",
        "type": "block",
        "position": {
          "x": 310,
          "y": 780,
          "w": 940,
          "h": 30
        }
      }
    ],
    "globalInputs": [
      "input_global_trp"
    ]
  },
  "description": "Tracking objects",
  "title": "Info"
}
The line chart maps the data from 'AnotherCustomQuery' (ds_jjp4wUrz), which is a query returning a chart that gets the max object_time for the 3 objects at one-minute intervals:

.... | chart span=1m max(object_time) OVER _time by object

This produces a 3-line chart, each line representing one object. I can get the time the user clicked on, tokenized in "click_time", but a few puzzles remain:
1. When the user clicks on an object line, how do I get the line's object_name and pass it along to my search query?
2. For the "drilldown.linkToSearch", how can I add 1 minute to the click_time and use it as the latest? Is it possible I don't need that range and can search at a specific time?
3. When I open the dashboard, the first click search says "invalid earliest time", but subsequent clicks on the chart say it's working fine. Is there a particular reason why this is happening?