All Posts



Unable to understand the solution, could you please elaborate more? I see the raw data as below: eventTimestamp=2024-04-04T02:24:52.762129638. I would like to extract the time from the above, like 02:24.
Can anyone explain if the following issues could be interconnected?

- Storage Limit: Splunk's storage is nearing its limit. Could this be affecting the performance or functionality of other components?
- Permission Error: An error message indicates that the "Splunk_SA_CIM" app either does not exist or lacks sufficient permissions. Could this be causing issues with data access or processing?
- Transparent Huge Pages (THP) Status: THP is not disabled. It's known that THP can interfere with Splunk's memory management. Could this be contributing to the problems?
- Memory and Ulimit: Could memory constraints or ulimit settings be causing errors?
- Remote Search Process Failure: There was a failure in the remote search process on a peer, leading to potentially incomplete search results. The search process on the peer (the affected indexer) ended prematurely. The error message suggests that the application "Splunk_SA_CIM" does not exist. Could this be related to the aforementioned "Splunk_SA_CIM" error?

Could these issues be interconnected, and if so, how? Could resolving one issue potentially alleviate the others?
Hi @bhaskar5428, Your rex command seems to be trying to extract the Time field from the @timestamp field. Can you please show the raw data by clicking the "Show as raw text" option under the raw event? Splunk displays JSON events formatted, but rex works on the actual raw text. We cannot compare your regex against the raw data using this screen capture.
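The point that rex operates on the raw text rather than the formatted display can be illustrated outside Splunk. This is a hypothetical sketch in Python (re syntax, which is close to but not identical to Splunk's PCRE) with a made-up event:

```python
import json
import re

# Hypothetical raw event as ingested: compact JSON, no space after the colon.
raw = '{"@timestamp":"2024-04-04T02:24:52.762Z","level":"INFO"}'

# What the UI shows after JSON formatting: indented, with a space after each colon.
pretty = json.dumps(json.loads(raw), indent=4)

# A pattern written by eyeballing the formatted view assumes that space:
formatted_view_pattern = re.compile(r'"@timestamp": "(?P<ts>[^"]+)"')
print(formatted_view_pattern.search(pretty) is not None)  # True
print(formatted_view_pattern.search(raw) is not None)     # False: fails on raw text

# A pattern tolerant of optional whitespace works on both forms:
robust_pattern = re.compile(r'"@timestamp"\s*:\s*"(?P<ts>[^"]+)"')
print(robust_pattern.search(raw).group("ts"))             # 2024-04-04T02:24:52.762Z
```

This is why a regex built against the pretty-printed event view can silently fail at search time.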
I am planning to give a basic Splunk session to my team. Can you help me find a cheat sheet available online that I can download easily?
===========================================
Query used:

index=* namespace="dk1017-j" sourcetype="kube:container:kafka-clickhouse-snapshot-writer" message="*Snapshot event published*" AND message="*dbI-LDN*" AND message="*2024-04-03*" AND message="*"
| fields message
| rex field=_raw "\s+date=(?<BusDate>\d{4}-\d{2}-\d{2})"
| rex field=_raw "sourceSystem=(?<Source>[^,]*)"
| rex field=_raw "entityType=(?<Entity>\w+)"
| rex field=_raw "\"timestamp\":\"(?<Time>\d{4}-\d{2}-\d{2}[T]\d{2}:\d{2})"  -- this one is not working
| sort Time desc
| dedup Entity
| table Source, BusDate, Entity, Time
===========================================
This is how the raw data looks. I would like to extract only the time; please also suggest how I can convert it to AM/PM. Kindly provide a solution.
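As a sketch of the extraction and AM/PM conversion logic, checked here in Python with a made-up timestamp (in SPL, the equivalent conversion is typically done with eval plus strptime/strftime):

```python
import re
from datetime import datetime

# Hypothetical event in the shape the post describes.
event = 'eventTimestamp=2024-04-04T02:24:52.762129638'

# Capture the date and the HH:MM portion, mirroring the rex idea above.
m = re.search(r'(?P<date>\d{4}-\d{2}-\d{2})T(?P<time>\d{2}:\d{2})', event)
time_24h = m.group("time")  # '02:24'

# Convert the 24-hour time to a 12-hour AM/PM representation.
time_12h = datetime.strptime(time_24h, "%H:%M").strftime("%I:%M %p")
print(time_24h, time_12h)  # 02:24 02:24 AM
```

The same %H:%M / %I:%M %p format codes are what you would feed to SPL's strptime and strftime eval functions.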
Hi @viktoriiants, How about something like this:

index=_internal
| eval dayOfWeek=strftime(_time, "%A"), date=strftime(_time, "%Y-%m-%d")
| eval dayNum=tonumber(strftime(_time,"%w")) + 1 ``` 1=Sunday, ..., 7=Saturday```
| stats count as "Session count" by dayOfWeek, date
| addtotals col=t row=f
| eval sort = if(isnull(date),1,0)
| sort - sort + date
| fields - sort

Here we're creating a new temporary field to sort on: we set it to 1 for our total row, and 0 for all other rows. Then we sort by this column and the date column. Finally, we remove the "sort" column.
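The temporary-sort-key trick above is general: flag the special row, then sort on the flag before the real key. A minimal Python sketch with made-up rows, mirroring the `sort - sort + date` behaviour (totals row pinned first, remaining rows by date):

```python
# Hypothetical daily counts plus a totals row (date=None), mirroring
# the addtotals output in the search above.
rows = [
    {"date": "2024-04-02", "count": 7},
    {"date": None, "count": 12},  # the "Total" row added by addtotals
    {"date": "2024-04-01", "count": 5},
]

# Sort key = (is-not-total, date): the totals row gets the smallest
# first component, so it lands first; everything else sorts by date.
rows.sort(key=lambda r: (0 if r["date"] is None else 1, r["date"] or ""))
print([r["date"] for r in rows])  # [None, '2024-04-01', '2024-04-02']
```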
This issue was resolved by increasing the MetaSpace value to 256MB from the default of 64MB.
Hi, How do I separate out the multiple error values, instead of "OK": Invalid password, reset password, permission denied, etc.?

index=events event.Properties.errMessage != "Invalid LoginID","Account Temporarily Locked Out","Permission denied","Unauthorized user","Account Pending Verification","Invalid parameter value"
| stats count by event.Properties.errMessage
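In SPL, excluding a list of values is usually written with a NOT ... IN (...) clause rather than chained != comparisons. The filter-then-count logic itself can be sketched in Python with made-up messages:

```python
from collections import Counter

# Hypothetical stream of error messages.
events = ["OK", "Invalid password", "OK", "Permission denied",
          "Invalid password", "Reset password", "OK"]

# Values to exclude from the breakdown (here: only the success marker).
excluded = {"OK"}

# Equivalent of filtering the events and then `stats count by errMessage`.
counts = Counter(e for e in events if e not in excluded)
print(counts)
```

The same shape applies however long the exclusion list is: put every unwanted value in the set, count everything else.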
Figured out the solution:
1. I just use $name$ to get the clicked line's name from the click event object.
2. Set latest="+1m".
3. Removed all defined tokens and put the actual values into the search.

"eventHandlers": [
  {
    "type": "drilldown.linkToSearch",
    "options": {
      "enableSmartSources": true,
      "query": "... where match(Object,\"$name$\") and match(Object_Time,\"$value$\")\r\n| ...",
      "earliest": "$row._time.value$",
      "latest": "+1m",
      "type": "custom",
      "newTab": true
    }
  }
]
I've just checked to see how the "subsearch" would run, I've changed my time picker to 30 mins and it hasn't run anything, it's been stuck at 0 of 0 events matched with the bar flashing....

The question is: have you manually verified that your data in the new time period actually contain matching hosts? Here is a quick way to confirm that the subsearch method works.

1. Select a couple of hosts that you know match some events in this time period. Say, host1 and host2. Run this search with the chosen time picker.

| tstats values(source) where index=* (host="host1" OR host="host2") earliest=-30m@h latest=-0@h by host, index

If this search gives you no output, you need to find another couple of hosts until you get non-zero output. (Hint: it is best to run all tests within time boundaries that you know your tests will not cross. I would suggest using fixed earliest/latest rather than the time picker. For example, earliest=-30m@h latest=-0m@h)

2. Make sure that host1 and host2 exist in mylookup.csv with this search

| inputlookup mylookup.csv where host IN (host1, host2) | fields host | dedup host | format

As @bowesmana explained, the output should be something like search ( ( host="host1" ) OR ( host="host2" ) ). If your result is different, that means mylookup.csv does not contain host1 and host2. You then need to redesign/repopulate your lookup table.

3. Run the following combined search

| tstats values(source) where index=* [inputlookup mylookup.csv where host IN (host1, host2) | fields host | dedup host] earliest=-30m@h latest=-0@h by host, index

This search should give you the exact same results as the first one. Then, you just remove the where clause in the subsearch.

Also, the second way "lookup" uses the same concept of my previous search, I will most likely run into a "VV data is too large for serialization" error

Why do you say @bowesmana's lookup method uses the same concept as your join method? Have you tried it?
It is totally different because it doesn't involve join. Are you suggesting that | tstats values(source) where index=* by host, index always gives you that error? What about | tstats values(source) where index=* by host? What about | tstats values(source) where index=*? If these searches give you errors, you may have some fundamental problem in your indexer. No amount of SPL can save the day. Given the chance, however, I would use the subsearch method because it is the fastest.
No, because in the official docs they mention only url = %scheme%://%host%:%port%

https://dev.splunk.com/enterprise/docs/devtools/java/logging-java/howtouseloggingjava/enableloghttpjava

Also, I tried including the HEC REST endpoint, but it is not working in my case.
There are a few questions regarding the Interactions for Dashboard Studio. This is the current code I have:

{
    "drilldown": "true",
    "visualizations": {
        "viz_ar3dmyq9": {
            "type": "splunk.line",
            "dataSources": {
                "primary": "ds_jjp4wUrz"
            },
            "eventHandlers": [
                {
                    "type": "drilldown.setToken",
                    "options": {
                        "tokens": [
                            {
                                "token": "click_time",
                                "key": "row._time.value"
                            }
                        ]
                    }
                },
                {
                    "type": "drilldown.linkToSearch",
                    "options": {
                        "query": "SomeSearchQuery",
                        "earliest": "$click_time$",
                        "latest": "$global_time.latest$",
                        "type": "custom",
                        "newTab": true
                    }
                }
            ],
            "options": {
                "seriesColorsByField": {},
                "dataValuesDisplay": "minmax",
                "y2AxisTitleText": ""
            }
        },
        "viz_leeY0Yzv": {
            "type": "splunk.markdown",
            "options": {
                "markdown": "**Value of Clicked row : $click_time$**",
                "backgroundColor": "#ffffff",
                "fontFamily": "Times New Roman",
                "fontSize": "extraLarge"
            }
        }
    },
    "dataSources": {
        "ds_uHpCvdwq": {
            "type": "ds.search",
            "options": {
                "enableSmartSources": true,
                "query": "someCustomQuery",
                "refresh": "10s",
                "refreshType": "delay"
            },
            "name": "CustomQuery"
        },
        "ds_jjp4wUrz": {
            "type": "ds.search",
            "options": {
                "query": "AnotherCustomQuery",
                "refresh": "10s",
                "refreshType": "delay"
            },
            "name": "CustomQuery2"
        }
    },
    "defaults": {
        "dataSources": {
            "ds.search": {
                "options": {
                    "queryParameters": {
                        "latest": "$global_time.latest$",
                        "earliest": "$global_time.earliest$"
                    }
                }
            }
        }
    },
    "inputs": {
        "input_global_trp": {
            "type": "input.timerange",
            "options": {
                "token": "global_time",
                "defaultValue": "-60m@m,now"
            },
            "title": "Global Time Range"
        }
    },
    "layout": {
        "type": "absolute",
        "options": {
            "width": 1440,
            "height": 960,
            "display": "auto"
        },
        "structure": [
            {
                "item": "viz_ar3dmyq9",
                "type": "block",
                "position": {
                    "x": 0,
                    "y": 0,
                    "w": 1440,
                    "h": 330
                }
            },
            {
                "item": "viz_leeY0Yzv",
                "type": "block",
                "position": {
                    "x": 310,
                    "y": 780,
                    "w": 940,
                    "h": 30
                }
            }
        ],
        "globalInputs": [
            "input_global_trp"
        ]
    },
    "description": "Tracking objects",
    "title": "Info"
}

The line chart maps the data from 'AnotherCustomQuery' (ds_jjp4wUrz), which is a query returning a chart that gets the max object_times for the 3 objects at one-minute intervals: .... | chart span=1m max(object_time) OVER _time by object This produces a 3-line chart, each line representing one object. I can get the time the user clicked on, tokenized in "click_time", but a few puzzles remain: 1. When the user clicks on an object line, how do I get the line's object_name and pass it along to my search query? 2.
For the "drilldown.linkToSearch", how can I add 1 minute to the click_time and put that into the latest? Or is it possible I don't need that range and can search at a specific time? 3. When I open the dashboard, the first click search says "invalid earliest time", but subsequent clicks on the chart work fine. Is there a particular reason why this is happening?
Hello    We received an alert from the Upgrade Readiness App about this app not being compatible with Python 3. This app appears to be an internal Splunk app. Does anyone know anything about it? Splunk Cloud version: 9.1.2312.103   Thank you!  
Hi All, I have an issue where the Data Model Wrangler app can no longer see most tags where all other apps can. The Data Model Wrangler app only sees 30 tags (None of the CIM tags) where other apps see 830 tags. The Data Model Wrangler app used to be able to see all tags and all panels were displaying correctly. Now we get no results for the Field Quality Data score due to tag visibility. The permissions on the tags are everyone read and globally shared. Has anyone experienced anything similar where one app cannot see tags that all other apps can see? Thanks
Appreciate your speedy reply! I've just checked to see how the "subsearch" would run; I've changed my time picker to 30 mins and it hasn't run anything, it's been stuck at 0 of 0 events matched with the bar flashing.... Also, since the second way, "lookup", uses the same concept as my previous search, I will most likely run into a "VV data is too large for serialization" error.
sample log:

{"date" : "2021-01-01 00:00:00.123 | dharam=fttc-pb-12312-esse-4 | appLevel=INRO | appName=REME_CASHE_ATTEMPT_PPI | env=sit | hostName=apphost000adc | pointer=ICFD | applidName=http.ab.web.com|news= | list=OUT_GOING | team=norpass | Category=success | status=NEW | timeframe=20", "tags": {"host": "apphost000adc" , "example": "6788376378jhjgjhdh2h3jhj2", "region": null, "resource": "add-njdf-tydfth-asd-1"}}

I used the below regex to extract all fields, but one field, timeframe, is not getting extracted:

|regex  _raw= (\w+)\=(.+?) \|

How do I modify my regex to extract the timeframe field as well?
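The pattern requires a trailing " |" after every value, but the last field (timeframe=20) is followed by a quote instead, so it can never match. One possible fix is to let each value terminate at either a pipe or a closing quote. Here is a check in Python (re syntax, close to but not identical to Splunk's PCRE) against an abridged version of the sample:

```python
import re

# Abridged version of the sample log line from the question.
raw = ('dharam=fttc-pb-12312-esse-4 | appLevel=INRO | status=NEW | timeframe=20", '
       '"tags": {"host": "apphost000adc"}')

# Original idea: (\w+)=(.+?) \|  requires a literal " |" after the value,
# so timeframe=20 (followed by a quote) is never captured.
# Adjusted pattern: value stops before either a pipe or a closing quote,
# with \s* trimming any trailing space.
pairs = dict(re.findall(r'(\w+)=([^|"]*?)\s*(?=\||")', raw))
print(pairs["timeframe"])  # 20
```

The same lookahead idea should carry over to the rex pattern, though the exact escaping inside an SPL string needs checking.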
Hi @karthi2809, Try this CSS: #input_link_split_by {width:fit-content!important;} #input_link_split_by.input-link button{ width: fit-content!important; margin-right:2px; background-colo... See more...
Hi @karthi2809, Try this CSS: #input_link_split_by {width:fit-content!important;} #input_link_split_by.input-link button{ width: fit-content!important; margin-right:2px; background-color: #3c444d; border-top-color: #3c444d; border-top-style: solid; border-top-width: 1px; border-right-color: #3c444d; border-right-style: solid; border-right-width: 1px; border-left-color:#3c444d; border-left-style: solid; border-left-width: 1px; border-top-left-radius: 10px; border-top-right-radius: 10px; transition: background-color 0.5s ease; transition: border-color 0.5s ease; } #input_link_split_by button:hover{ background-color:#d2e3a0; border-right-color: #d2e3a0; border-top-color:#d2e3a0; border-left-color:#d2e3a0; color: black; } #input_link_split_by button[aria-checked="true"]{ background-color: #d2e3a0; color: black; }   That gives you tabs that keep their colour after you have selected them: The key bit is:  #input_link_split_by button[aria-checked="true"] Which is the CSS to identify a selected tab.   Cheers, Spav
If your problem is resolved, then please click the "Accept as Solution" button to help future readers.
Forget join - that is not the Splunk way of doing things. Use either a subsearch or a lookup - they may perform differently depending on data volumes, but you can do this.

Subsearch method

| tstats values(source) where index=* [ | inputlookup mylookup.csv | fields host | dedup host ] by index, host

The subsearch will effectively return ( host=x OR host=y OR host=z...) which is then used in the outer search.

Lookup method

| tstats values(source) where index=* by index, host
| lookup mylookup.csv host

This gets ALL the data from the indexes and then does the lookup to get the OS details. You can always do

| where isnull(os)

which will then show those hosts found in the data that do not exist in the lookup. Note that the lookup CSV will be case sensitive - if you want to make it insensitive, create a lookup definition and configure it as case insensitive.
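The difference between the two methods can be sketched in Python with made-up events and lookup rows (all names here are hypothetical):

```python
# Hypothetical indexed events and lookup table.
events = [
    {"host": "host1", "source": "/var/log/a.log"},
    {"host": "host2", "source": "/var/log/b.log"},
    {"host": "host3", "source": "/var/log/c.log"},
]
lookup = {"host1": "linux", "host2": "windows"}  # host -> os

# Subsearch method: restrict the search to the lookup's hosts up front,
# like the generated ( host=x OR host=y ... ) clause.
subsearch_hosts = set(lookup)
filtered = [e for e in events if e["host"] in subsearch_hosts]

# Lookup method: take all events, then enrich; the isnull(os) equivalent
# finds hosts present in the data but missing from the lookup.
enriched = [{**e, "os": lookup.get(e["host"])} for e in events]
missing = [e["host"] for e in enriched if e["os"] is None]

print([e["host"] for e in filtered])  # ['host1', 'host2']
print(missing)                        # ['host3']
```

This also shows why only the lookup method can report hosts that are in the data but absent from the CSV: the subsearch method discards them before enrichment.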
Hi Rohit, Are you still getting the error shown in your screenshot? I can help you in this case, but your screenshot's error is very generic; it is possible that you got this error for many different reasons. Could you please share with me the database agent logs you deployed? Then we can easily find a solution to fix this problem. Thanks, Cansel