All Posts


Hi. How do I separate out multiple errors instead of just "OK" - Invalid password, reset password, permission denied, etc.? This is what I tried:

index=events event.Properties.errMessage != "Invalid LoginID","Account Temporarily Locked Out","Permission denied","Unauthorized user","Account Pending Verification","Invalid parameter value"
| stats count by event.Properties.errMessage
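One way to do this (a sketch, assuming the index and field names above) is NOT ... IN (...), since != does not accept a comma-separated list of values:

index=events NOT event.Properties.errMessage IN ("Invalid LoginID", "Account Temporarily Locked Out", "Permission denied", "Unauthorized user", "Account Pending Verification", "Invalid parameter value")
| stats count by event.Properties.errMessage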
Figured out the solution:
1. I just used $name$ to get the clicked line's object name.
2. Set latest="+1m".
3. Removed all defined tokens and put the actual values into the search.

"eventHandlers": [
    {
        "type": "drilldown.linkToSearch",
        "options": {
            "enableSmartSources": true,
            "query": "... where match(Object,\"$name$\") and match(Object_Time,\"$value$\")\r\n| ...",
            "earliest": "$row._time.value$",
            "latest": "+1m",
            "type": "custom",
            "newTab": true
        }
    }
]
I've just checked to see how the "subsearch" would run, I've changed my time picker to 30 mins and it hasn't run anything, it's been stuck at 0 of 0 events matched with the bar flashing

The question is: have you manually verified that your data in the new time period actually contains matching hosts? Here is a quick way to confirm that the subsearch method works.

1. Select a couple of hosts that you know match some events in this time period. Say, host1 and host2. Run this search with the chosen time picker.

| tstats values(source) where index=* (host="host1" OR host="host2") earliest=-30m@h latest=-0m@h by host, index

If this search gives you no output, you need to find another couple of hosts until you get non-zero output. (Hint: it is best to run all tests with time boundaries that you know your tests will not cross. I would suggest using a fixed earliest/latest rather than the time picker. For example, earliest=-30m@h latest=-0m@h)

2. Make sure that host1 and host2 exist in mylookup.csv with this search

| inputlookup mylookup.csv where host IN (host1, host2) | fields host | dedup host | format

As @bowesmana explained, the output should be something like

search ( ( host="host1" ) OR ( host="host2" ) )

If your result is different, that means mylookup.csv does not contain host1 and host2. You then need to redesign/repopulate your lookup table.

3. Run the following combined search

| tstats values(source) where index=* [| inputlookup mylookup.csv where host IN (host1, host2) | fields host | dedup host] earliest=-30m@h latest=-0m@h by host, index

This search should give you the exact same results as the first one. Then, you just remove the where clause in the subsearch.

Also, the second way "lookup" uses the same concept of my previous search, I will most likely run into a "VV data is too large for serialization" error

Why do you say @bowesmana's lookup method uses the same concept as your join method? Have you tried it? It is totally different because it doesn't involve join. Are you suggesting that

| tstats values(source) where index=* by host, index

always gives you that error? What about

| tstats values(source) where index=* by host

What about

| tstats values(source) where index=*

If these searches give you an error, you may have some fundamental problem in your indexer. No amount of SPL can save the day.

Given the chance, however, I would use the subsearch method because it is the fastest.
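For completeness, the production version of step 3 with the test where clause removed would look something like this (a sketch, assuming the same mylookup.csv and fixed time bounds):

| tstats values(source) where index=* [| inputlookup mylookup.csv | fields host | dedup host] earliest=-30m@h latest=-0m@h by host, index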
No, because in the official docs they mention only url = %scheme%://%host%:%port%

https://dev.splunk.com/enterprise/docs/devtools/java/logging-java/howtouseloggingjava/enableloghttpjava

Also, I tried including the HEC REST endpoint, but that is not working in my case.
There are a few questions in regards to the Interactions for Dashboard Studio. This is the current code I have:

{
    "drilldown": "true",
    "visualizations": {
        "viz_ar3dmyq9": {
            "type": "splunk.line",
            "dataSources": {
                "primary": "ds_jjp4wUrz"
            },
            "eventHandlers": [
                {
                    "type": "drilldown.setToken",
                    "options": {
                        "tokens": [
                            {
                                "token": "click_time",
                                "key": "row._time.value"
                            }
                        ]
                    }
                },
                {
                    "type": "drilldown.linkToSearch",
                    "options": {
                        "query": "SomeSearchQuery",
                        "earliest": "$click_time$",
                        "latest": "$global_time.latest$",
                        "type": "custom",
                        "newTab": true
                    }
                }
            ],
            "options": {
                "seriesColorsByField": {},
                "dataValuesDisplay": "minmax",
                "y2AxisTitleText": ""
            }
        },
        "viz_leeY0Yzv": {
            "type": "splunk.markdown",
            "options": {
                "markdown": "**Value of Clicked row : $click_time$**",
                "backgroundColor": "#ffffff",
                "fontFamily": "Times New Roman",
                "fontSize": "extraLarge"
            }
        }
    },
    "dataSources": {
        "ds_uHpCvdwq": {
            "type": "ds.search",
            "options": {
                "enableSmartSources": true,
                "query": "someCustomQuery",
                "refresh": "10s",
                "refreshType": "delay"
            },
            "name": "CustomQuery"
        },
        "ds_jjp4wUrz": {
            "type": "ds.search",
            "options": {
                "query": "AnotherCustomQuery",
                "refresh": "10s",
                "refreshType": "delay"
            },
            "name": "CustomQuery2"
        }
    },
    "defaults": {
        "dataSources": {
            "ds.search": {
                "options": {
                    "queryParameters": {
                        "latest": "$global_time.latest$",
                        "earliest": "$global_time.earliest$"
                    }
                }
            }
        }
    },
    "inputs": {
        "input_global_trp": {
            "type": "input.timerange",
            "options": {
                "token": "global_time",
                "defaultValue": "-60m@m,now"
            },
            "title": "Global Time Range"
        }
    },
    "layout": {
        "type": "absolute",
        "options": {
            "width": 1440,
            "height": 960,
            "display": "auto"
        },
        "structure": [
            {
                "item": "viz_ar3dmyq9",
                "type": "block",
                "position": {
                    "x": 0,
                    "y": 0,
                    "w": 1440,
                    "h": 330
                }
            },
            {
                "item": "viz_leeY0Yzv",
                "type": "block",
                "position": {
                    "x": 310,
                    "y": 780,
                    "w": 940,
                    "h": 30
                }
            }
        ],
        "globalInputs": [
            "input_global_trp"
        ]
    },
    "description": "Tracking objects",
    "title": "Info"
}

The line chart maps the data from 'AnotherCustomQuery' (ds_jjp4wUrz), which is a query returning a chart that gets the max object_times for the 3 objects at one-minute intervals:

.... | chart span=1m max(object_time) OVER _time by object

producing a 3-line chart, each line representing one object. I can get the time the user clicked on, tokenized in "click_time", but a few puzzles remain:
1. When the user clicks on an object line, how do I get the line's object_name and pass it along to my search query?
2. For the "drilldown.linkToSearch", how can I add 1 minute to the click_time and put it into the latest? Is it possible I don't need that range and can search at a specific time?
3. When I open the dashboard, the first click search says "invalid earliest time", but subsequent clicks on the chart work fine. Is there a particular reason why this is happening?
Hello

We received an alert from the Upgrade Readiness App about this app not being compatible with Python 3. This app appears to be an internal Splunk app. Does anyone know anything about it?

Splunk Cloud version: 9.1.2312.103

Thank you!
Hi All, I have an issue where the Data Model Wrangler app can no longer see most tags while all other apps can. The Data Model Wrangler app only sees 30 tags (none of the CIM tags) where other apps see 830 tags. The Data Model Wrangler app used to be able to see all tags, and all panels were displaying correctly. Now we get no results for the Field Quality Data score due to tag visibility. The permissions on the tags are everyone: read, and they are globally shared. Has anyone experienced anything similar, where one app cannot see tags that all other apps can see? Thanks
Appreciate your speedy reply!

I've just checked to see how the "subsearch" would run. I've changed my time picker to 30 mins and it hasn't run anything; it's been stuck at 0 of 0 events matched, with the bar flashing....

Also, since the second way ("lookup") uses the same concept as my previous search, I will most likely run into a "VV data is too large for serialization" error
sample log:

{"date" : "2021-01-01 00:00:00.123 | dharam=fttc-pb-12312-esse-4 | appLevel=INRO | appName=REME_CASHE_ATTEMPT_PPI | env=sit | hostName=apphost000adc | pointer=ICFD | applidName=http.ab.web.com|news= | list=OUT_GOING | team=norpass | Category=success | status=NEW | timeframe=20", "tags": {"host": "apphost000adc" , "example": "6788376378jhjgjhdh2h3jhj2", "region": null, "resource": "add-njdf-tydfth-asd-1"}}

I used the regex below to extract all fields, but one field is not getting extracted: timeframe.

| regex _raw= (\w+)\=(.+?) \|

How do I modify my regex to extract the timeframe field as well?
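The pattern requires a literal " |" after every value, but timeframe=20 is the last pair inside the quoted string and is followed by a closing quote rather than a pipe, so it never matches. A minimal sketch of a fix, assuming the sample event above (note that the regex command only filters events; rex is what extracts fields):

| rex field=_raw "timeframe=(?<timeframe>[^|\"]+)"

More generally, replacing the trailing " \|" in the original pattern with a lookahead such as (?=\s*\||\"|$) lets the final key=value pair match as well.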
Hi @karthi2809,

Try this CSS:

#input_link_split_by {
    width: fit-content !important;
}
#input_link_split_by.input-link button {
    width: fit-content !important;
    margin-right: 2px;
    background-color: #3c444d;
    border-top: 1px solid #3c444d;
    border-right: 1px solid #3c444d;
    border-left: 1px solid #3c444d;
    border-top-left-radius: 10px;
    border-top-right-radius: 10px;
    /* combined into one declaration; two separate transition properties would override each other */
    transition: background-color 0.5s ease, border-color 0.5s ease;
}
#input_link_split_by button:hover {
    background-color: #d2e3a0;
    border-right-color: #d2e3a0;
    border-top-color: #d2e3a0;
    border-left-color: #d2e3a0;
    color: black;
}
#input_link_split_by button[aria-checked="true"] {
    background-color: #d2e3a0;
    color: black;
}

That gives you tabs that keep their colour after you have selected them. The key bit is:

#input_link_split_by button[aria-checked="true"]

which is the CSS to identify a selected tab.

Cheers,
Spav
If your problem is resolved, then please click the "Accept as Solution" button to help future readers.
Forget join - that is not the Splunk way of doing things. Use either a subsearch or a lookup - they may perform differently depending on data volumes, but you can do this

Subsearch method

| tstats values(source) where index=* [ | inputlookup mylookup.csv | fields host | dedup host ] by index, host

The subsearch will effectively return with ( host=x OR host=y OR host=z...) which is then used in the outer search.

Lookup method

| tstats values(source) where index=* by index, host
| lookup mylookup.csv host

This gets ALL the data from the indexes and then does the lookup to get the OS details. You can always do

| where isnull(os)

which will then show those hosts that are found in the data but do not exist in the lookup. Note that the lookup CSV will be case sensitive - if you want to make it case insensitive, create a lookup definition and configure it as case insensitive
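Putting the lookup method together (a sketch, assuming mylookup.csv has host and os columns as described in the question):

| tstats values(source) as source where index=* by index, host
| lookup mylookup.csv host OUTPUT os
``` keep only hosts that exist in the lookup; use isnull(os) instead to see hosts missing from it ```
| where isnotnull(os)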
Hi Rohit,

Are you still getting the error shown in your screenshot? I can help you in this case, but your screenshot's error is very generic; it is possible that you got this error for many different reasons. Could you please share with me the logs of the database agent you deployed? Then we can easily find a solution to fix this problem.

Thanks
Cansel
I've not used durable searches, so I am not totally sure how they work in terms of timestamp data in the index, however, have you tried to include the durable_cursor in your stats like this

index=_internal sourcetype=scheduler earliest=-1h@h latest=now
``` Find the latest durable_cursor for this saved search ```
| eventstats max(durable_cursor) as durable_cursor by savedsearch_name
``` and include it in the stats ```
| stats latest(status) as FirstStatus max(durable_cursor) as durable_cursor by scheduled_time savedsearch_name
| search NOT FirstStatus IN ("success","delegated_remote")

However, I don't see how you can do the if test when you do not have next_scheduled_time in the _internal index data - you will need to use the REST API to get the next scheduled time. Or maybe you can make the eventstats/stats do this

| eventstats max(durable_cursor) as durable_cursor max(eval(if(status="success", scheduled_time, null()))) as max_success_scheduled_time by savedsearch_name
| stats latest(status) as FirstStatus max(durable_cursor) as durable_cursor max(max_success_scheduled_time) as max_success_scheduled_time by scheduled_time savedsearch_name

but I am unfamiliar with durable searches, so don't know how these timestamps work
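If you do go the REST route, something like this might work for pulling the next scheduled run time (a sketch; the saved/searches endpoint exposes a next_scheduled_time field, though the fields available can vary by version):

| rest /servicesNS/-/-/saved/searches splunk_server=local
| fields title next_scheduled_time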
Hi all, getting to grips with SPL and would be forever grateful if someone could lend their brain for the below:

I've got a lookup in the format below:

(Fields) --> host, os, os version
(Values) --> Server01, Windows, Windows Server 2019

But in my case, this lookup has 3000 host values, and I want to know their source values in Splunk. (This lookup was generated by a match condition with another, so I KNOW that these hosts are present in my Splunk env.)

I basically need a way to do the following:

| tstats values(source) where index=* host=(WHATEVER IS IN MY LOOKUP HOST FIELD) by index, host

But I can't seem to find a way. I did originally try to match with the below:

| tstats values(source) where index=* by host, index
| join type=inner host [| inputlookup mylookup.csv | fields host | dedup host]

But my results were too large for Splunk to handle, please help
Thank you for the reply. I did a simple test on simple text event data, and | eval test=case(x=="X", a+b) does work.
My environment consists of 1 search head, 1 manager, and 3 indexers. I added another search head so that I can put Enterprise Security on it, but when I run any search I get this error. (The only reason I did index=* was to show that ALL indexes are like this, and no matter what I search this happens.) What I'm most confused about is why the bottom portion (where the search results are) is greyed out and I can't interact with it.

Here are the last few lines from the search.log; if more is required I can send more of the log, it is just really long.

04-03-2024 18:00:38.937 INFO SearchStatusEnforcer [11858 StatusEnforcerThread] - sid=1712181568.6, newState=BAD_INPUT_CANCEL, message=Search auto-canceled
04-03-2024 18:00:38.937 ERROR SearchStatusEnforcer [11858 StatusEnforcerThread] - SearchMessage orig_component=SearchStatusEnforcer sid=1712181568.6 message_key= message=Search auto-canceled
04-03-2024 18:00:38.937 INFO SearchStatusEnforcer [11858 StatusEnforcerThread] - State changed to BAD_INPUT_CANCEL: Search auto-canceled
04-03-2024 18:00:38.945 INFO TimelineCreator [11862 phase_1] - Commit timeline at cursor=1712168952.000000
04-03-2024 18:00:38.945 WARN DispatchExecutor [11862 phase_1] - Execution status=CANCELLED: Search has been cancelled
04-03-2024 18:00:38.945 INFO ReducePhaseExecutor [11862 phase_1] - Ending phase_1
04-03-2024 18:00:38.945 INFO UserManager [11862 phase_1] - Unwound user context: b.morin -> NULL
04-03-2024 18:00:38.948 INFO UserManager [11858 StatusEnforcerThread] - Unwound user context: b.morin -> NULL
04-03-2024 18:00:38.950 INFO DispatchManager [11855 searchOrchestrator] - DispatchManager::dispatchHasFinished(id='1712181568.6', username='b.morin')
04-03-2024 18:00:38.950 INFO UserManager [11855 searchOrchestrator] - Unwound user context: b.morin -> NULL
04-03-2024 18:00:38.950 ERROR ScopedAliveProcessToken [11855 searchOrchestrator] - Failed to remove alive token file='/opt/splunk/var/run/splunk/dispatch/1712181568.6/alive.token'. No such file or directory
04-03-2024 18:00:38.950 INFO SearchOrchestrator [11852 RunDispatch] - SearchOrchestrator is destructed. sid=1712181568.6, eval_only=0
04-03-2024 18:00:38.952 INFO UserManager [11861 SearchResultExecutorThread] - Unwound user context: b.morin -> NULL
04-03-2024 18:00:38.961 INFO SearchStatusEnforcer [11852 RunDispatch] - SearchStatusEnforcer is already terminated
04-03-2024 18:00:38.961 INFO UserManager [11852 RunDispatch] - Unwound user context: b.morin -> NULL
04-03-2024 18:00:38.961 INFO LookupDataProvider [11852 RunDispatch] - Clearing out lookup shared provider map
04-03-2024 18:00:38.962 INFO dispatchRunner [10908 MainThread] - RunDispatch is done: sid=1712181568.6, exit=0
Alternatively, without having to know the names of the fields

| untable Name Date value
| appendpipe [| stats count(eval(value > 0)) as value by Name | eval Date="Count_Of_Rows_With_Data"]
| xyseries Name Date value
| eval Count_Of_Rows_With_Data=0
| foreach 20* [| eval Count_Of_Rows_With_Data=if('<<FIELD>>' > 0, Count_Of_Rows_With_Data+1, Count_Of_Rows_With_Data)]
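A quick way to try the foreach approach in isolation (a sketch with made-up year columns; makeresults format=csv requires Splunk 8.2 or later):

| makeresults format=csv data="Name,2022,2023
alpha,5,0
beta,3,7"
| eval Count_Of_Rows_With_Data=0
| foreach 20* [| eval Count_Of_Rows_With_Data=if('<<FIELD>>' > 0, Count_Of_Rows_With_Data+1, Count_Of_Rows_With_Data)]

Here alpha should come out with Count_Of_Rows_With_Data=1 and beta with 2.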
There is likely still something wrong with the Java installation. I remember installing JDK 17 myself and it did not work, but then I tried another package and it worked. Where are you getting your JDK from?