All Posts


In Splunk, how do we create these CMDB fields mapped to any sourcetype when a new host is added as an asset? For example the fields below, if we don't have C: CRITICITY, ENVIRONMENT, FUNCTION, OFFER, BUSINESS UNIT, CODEREF, DATACENTER
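One common pattern for this, sketched below as an assumption since the post doesn't show the setup, is to keep the CMDB attributes in a lookup file keyed on host and apply it as an automatic lookup to the relevant sourcetype; the stanza, lookup, and field names here are illustrative:

# transforms.conf
[cmdb_assets]
filename = cmdb_assets.csv

# props.conf
[your_sourcetype]
LOOKUP-cmdb_enrich = cmdb_assets host OUTPUTNEW CRITICITY ENVIRONMENT FUNCTION OFFER BUSINESS_UNIT CODEREF DATACENTER

With this in place, any newly added host gets the CMDB fields at search time as soon as its row exists in cmdb_assets.csv.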
Hi @SN1  It sounds like your license has expired or was not installed correctly. Go to https://yourSplunkInstance/en-US/manager/system/licensing and check that the license is showing as valid. If your license isn't showing here, or if the instance cannot connect to your license server (if applicable), then you will need to resolve this before being able to search non-internal indexes. Please let me know how you get on, and consider accepting this answer or adding karma to this answer if it has helped. Regards Will
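If you have CLI access to the server, you can also list the installed licenses from the command line; the path below assumes a default Linux install location:

$SPLUNK_HOME/bin/splunk list licenses

This prints each installed license with its status and expiration, which is a quick way to cross-check what the licensing page shows.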
Assuming your fields have already been extracted, try something like this:

| transpose 0 column_name="KeyName"
| rename "row 1" as OldValue, "row 2" as NewValue
| eval diff=if(OldValue!=NewValue,1,null())
| where diff=1
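For a self-contained illustration of the approach (the two sample events and field values below are made-up stand-ins for your real old/new results):

| makeresults count=2
| streamstats count as row
| eval displayName=if(row=1, "ANY DISPLAY NAME", "ANY DISPLAY NAME 1"), state="enabled"
| fields - _time, row
| transpose 0 column_name="KeyName"
| rename "row 1" as OldValue, "row 2" as NewValue
| where OldValue!=NewValue

This keeps only the keys whose values differ between the two rows (here, displayName).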
4 years????? And nothing has been done to fix it. This part should then be removed from the code. Here you see the memory usage of one of our HFs, which was upgraded from 9.3.2 to 9.4. When all the memory is used up, it runs for some hours more and then dies. We reported this issue just days after 9.4.0 was released, and only got the fix just now.
Hello, I am seeing this error: [MSE-SVSPLUNKI01] restricting search to internal indexes only (reason: [DISABLED_DUE_TO_GRACE_PERIOD,0]). How do I resolve this?
Hi @uagraw01  I'm not sure if the bug is related to the issue you are having, as the bug relates to latest=now being omitted from searches where earliest=<something> is used. Is this a drilldown search from Enterprise Security? Or something else? Are you able to find the full search that was executed? It is odd that info_max_time_2 looks to contain "`bin" (according to the output), so it would be good to understand how that value could have got there! If you can't find the search, I'd look into _audit for 5 seconds either side of that error timestamp and start filtering down from there, maybe looking for keywords like "bin" as it appears in the error. Let us know what you find so we can help further! Please let me know how you get on, and consider accepting this answer or adding karma to this answer if it has helped. Regards Will
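As a starting point for that _audit search, something like the sketch below may help; the timestamps are placeholders to replace with a window of about 5 seconds either side of the error:

index=_audit action=search earliest="02/05/2025:10:30:35" latest="02/05/2025:10:30:45" "bin"

From the matching audit events you can pull the full search strings and narrow down which saved search or drilldown produced the bad latest value.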
Hi @vikashumble  I think this solution on another question might work for you; instead of me copying it over, check out https://community.splunk.com/t5/Dashboards-Visualizations/How-to-find-and-show-unique-and-missing-keys-between-two-JSON/m-p/675785 so you can get the full context. Please let me know how you get on, and consider accepting this answer or adding karma to this answer if it has helped. Regards Will
Hi @vikashumble  Would something like this work for you?

| makeresults
| eval _raw="{\"id\": \"12345\", \"domain\": [\"test.com\",\"sample.com\",\"example.com\"]}"
| eval domain=json_array_to_mv(json_extract(_raw,"domain"))
| eval whitelistedDomains=""
| foreach domain mode=multivalue
    [| eval whitelistedDomains=mvappend(if(tostring(json_extract(lookup("domainallowlist.csv",json_object("domain",<<ITEM>>),json_array("isAllowed")),"isAllowed"))=="1",<<ITEM>>,null()),whitelistedDomains) ]

This relies on having an "isAllowed"=1 value in the lookup, but could be adjusted to your scenario? Please let me know how you get on, and consider accepting this answer or adding karma to this answer if it has helped. Regards Will
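A possibly simpler alternative, sketched here on the assumption that the lookup file has a domain column containing only allowed domains: the lookup command matches each value of a multivalue input field, so the matched values can be returned directly.

| makeresults
| eval emailDomains=split("test.com,sample.com,example.com", ",")
| lookup domainallowlist.csv domain AS emailDomains OUTPUTNEW domain AS whitelistedDomains

Here whitelistedDomains should end up containing only the domains that were found in domainallowlist.csv.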
It's https://prometheus.io/ support, which was added almost 4 years ago (8.2.0). But it has been disabled since 8.2.1 due to a memory explosion.
Hello Splunkers!! We recently migrated Splunk from version 8.1.1 to 9.1.1 and encountered the following errors:

ERROR TimeParser [12568 SchedulerThread] - Invalid value "`bin" for time term 'latest'
ERROR TimeParser [12568 SchedulerThread] - Invalid value "$info_max_time_2$" for time term 'latest'

Upon reviewing the Splunk 9.1.1 release notes, I found that this issue is listed as a known bug. Has anyone observed and resolved this issue before? If you have implemented a fix, could you share the specific configuration changes or workarounds applied? Any insights on where to check (e.g., saved searches, scheduled reports, or specific configurations) would be greatly appreciated. Below is the screenshot of the known bug in 9.1.1. Thanks in advance for your help!
Can confirm that it fixed the memory leak that we saw on our upgraded HF server. This is not fixed in the newly released 9.4.1. But what does Prometheus do in Splunk? Is it some new function that was added to the 9.x server and was set to disabled? I cannot find any info in the server.conf docs.
Hello All, I have a multivalue field which contains domain names (for this case, say it is in a field named emailDomains and it contains 5 values). I have a lookup named whitelistdomains which contains 2000+ domain names. Now, what I want is to go through this multivalue field and check whether each domain name is present in my lookup. Is that possible? An example and expected output are below. I did try doing this using mvexpand, but sometimes I end up with memory issues on Splunk Cloud, so I want to avoid it. I tried using map and mvmap to see if I could somehow pass one value at a time to the inputlookup command and get the output, but so far I have not been able to figure it out properly. I did achieve this via a very dirty method of using appendpipe to get the list of values in the lookup and then eventstats to create that variable against each event for comparison, but this made the search very clunky and I am sure there are better ways of doing this. So, if you can please suggest a better way, that would be amazing.

emailDomains field: test.com sample.com example.com

whitelistdomains lookup data: whitelist.com sample.com something.com example.com ...and so on.

Expected output in whitelistedDomains (a new field created by looking up all the multivalue field values against the lookup): sample.com example.com
Hello All, I have a use case where I need to compare two JSON objects and highlight their key/value differences. This is just to ensure that we can let OSC know only about the changes that have been made, rather than sending both the old and new JSON in the alert. Is that doable? I tried using foreach, spath, and mvexpand but was not able to figure out a proper working solution. Any help on this is much appreciated.

Json1:
{ "id": "XXXXX", "displayName": "ANY DISPLAY NAME", "createdDateTime": "2021-10-05T07:01:58.275401+00:00", "modifiedDateTime": "2025-02-05T10:30:40.0351794+00:00", "state": "enabled", "conditions": { "applications": { "includeApplications": [ "YYYYY" ], "excludeApplications": [], "includeUserActions": [], "includeAuthenticationContextClassReferences": [], "applicationFilter": null }, "users": { "includeUsers": [], "excludeUsers": [], "includeGroups": [ "USERGROUP1", "USERGROUP2" ], "excludeGroups": [], "includeRoles": [], "excludeRoles": [] }, "userRiskLevels": [], "signInRiskLevels": [], "clientAppTypes": [ "all" ], "servicePrincipalRiskLevels": [] }, "grantControls": { "operator": "OR", "builtInControls": [ "mfa" ], "customAuthenticationFactors": [], "termsOfUse": [] }, "sessionControls": { "cloudAppSecurity": { "cloudAppSecurityType": "monitor", "isEnabled": true }, "signInFrequency": { "value": 1, "type": "hours", "authenticationType": "primaryAndSecondaryAuthentication", "frequencyInterval": "timeBased", "isEnabled": true } } }

Json2:
{ "id": "XXXXX", "displayName": "ANY DISPLAY NAME 1", "createdDateTime": "2021-10-05T07:01:58.275401+00:00", "modifiedDateTime": "2025-02-06T10:30:40.0351794+00:00", "state": "enabled", "conditions": { "applications": { "includeApplications": [ "YYYYY" ], "excludeApplications": [], "includeUserActions": [], "includeAuthenticationContextClassReferences": [], "applicationFilter": null }, "users": { "includeUsers": [], "excludeUsers": [], "includeGroups": [ "USERGROUP1", "USERGROUP2", "USERGROUP3" ], "excludeGroups": [ "USERGROUP4" ], "includeRoles": [], "excludeRoles": [] }, "userRiskLevels": [], "signInRiskLevels": [], "clientAppTypes": [ "all" ], "servicePrincipalRiskLevels": [] }, "grantControls": { "operator": "OR", "builtInControls": [ "mfa" ], "customAuthenticationFactors": [], "termsOfUse": [] }, "sessionControls": { "cloudAppSecurity": { "cloudAppSecurityType": "block", "isEnabled": true }, "signInFrequency": { "value": 2, "type": "hours", "authenticationType": "primaryAndSecondaryAuthentication", "frequencyInterval": "timeBased", "isEnabled": true } } }

Output expected (based on the above sample JSONs):

KeyName, Old Value, New Value
displayName, "ANY DISPLAY NAME", "ANY DISPLAY NAME 1"
modifiedDateTime, "2025-02-05T10:30:40.0351794+00:00", "2025-02-06T10:30:40.0351794+00:00"
conditions.users.includeGroups, ["USERGROUP1","USERGROUP2"], ["USERGROUP1","USERGROUP2","USERGROUP3"]
conditions.users.excludeGroups, [], ["USERGROUP4"]
sessionControls.cloudAppSecurity.cloudAppSecurityType, "monitor", "block"
sessionControls.signInFrequency.value, 1, 2

Thanks
Perhaps you need to look at why the internet connection is down for so long and invest in a more robust network architecture so that the connection is maintained for a higher percentage of the time?
If you need a quick patch then it may be possible to edit the password handler code, but as this app is Splunk supported, you could submit a support ticket for it.
Hi @Skv , as I said, Splunk Forwarders can store logs on their local disks if the connection to the Indexers isn't available, until the connection becomes available again. It depends on the available disk space: e.g. if you know that your systems generate 1 GB/hour and you need to cover 14 hours, you have to give your forwarders 14 GB of disk so the Forwarder can store all the logs. Then you should check whether the connection (when available) is sufficient to send all 14 GB to the indexers; that depends on the network bandwidth and on how long the network stays available. Ciao. Giuseppe
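To make the sizing concrete, here is a minimal sketch of a persistent queue on a forwarder network input; the port, index name, and 14 GB figure are illustrative assumptions based on the 1 GB/hour example above. Note that for monitored files a persistent queue is usually unnecessary, because the forwarder simply pauses reading and resumes from its saved file offset once the connection returns.

# inputs.conf on the forwarder (illustrative values)
[tcp://5514]
index = factory_logs
sourcetype = syslog
# spill events to disk when the in-memory queue fills,
# sized for roughly 14 hours at ~1 GB/hour
persistentQueueSize = 14GB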
@gcusello could you explain? I need to view the logs locally in the factory data room when the connection is not there, and once the connection is up it has to send the data to Splunk Cloud. How can this be done?
Please share (anonymised) raw events for your two examples (not pretty-print formatted versions), preferably in a code block using the </> button. Please explain what your desired results would look like - for example, in requirement 2, do you want the count of the number of times the response time has been 114 over the time period of your search? These events look like they might be JSON. Have you already extracted the JSON fields during ingestion, or are you working with raw, unparsed data? The more information you can give, the quicker you are likely to receive a useful response.
I figured it out:

| lookup batch.csv OUTPUT startTime finishTime
| eval startTime = max(mvmap(startTime, if(startTime <= _time, startTime, null())))
| eval finishTime = min(mvmap(finishTime, if(finishTime >= _time, finishTime, null())))
| lookup batch.csv startTime finishTime OUTPUT batchID
Hi @sanjai  I believe you are on to the right thing here with `collections/shared/TimeZones`. If you have a look at /opt/splunk/share/splunk/search_mrsparkle/exposed/js/views/shared/preferences/global/GlobalSettingsContainer.jsx, it references the same file, and this is the file which renders the user preferences and displays the Time zone dropdown. Have a look at including that file (collections/shared/TimeZones.js), or maybe copying it into your app for the purposes of testing. My paths are Linux-based, by the way, so they might need updating for your environment. Good luck! Please let me know how you get on, and consider accepting this answer or adding karma to this answer if it has helped. Regards Will