All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


I'm currently using this query to find the CPU utilization for a few hosts, but I want to see the average utilization per host: tag=name "CPU Utilization" | timechart span=15m max(SysStatsUtilizationCpu) by host limit=0. Any information would be helpful.
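A minimal variant of the search above, assuming the goal is the average rather than the peak utilization per host (tag and field names are taken from the original query):

```
tag=name "CPU Utilization"
| timechart span=15m avg(SysStatsUtilizationCpu) by host limit=0
```

timechart with avg() produces one series per host, each holding that host's 15-minute average; a single overall average per host could instead use stats avg(SysStatsUtilizationCpu) by host.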
Thank you, I am aware of that modal in the MC, but it gives me the same arcane name, for example: _ACCELERATE_111111-22222-333-4444-123456789_search_nobody_123456978_ACCELERATE_ However, the origin host is my dedicated MC Splunk server and there is only one accelerated report icon listed, for the License Usage Data Cube, so I assume that is the culprit. But why is it skipping? I clicked the accelerate option; perhaps I need to adjust the max scheduled searches? Yes, I found a number of garbage scheduled reports from years ago eating up resources and starving the accelerated report for the License Usage Data Cube. I incorrectly assumed that report would have priority for resources. Thank you for your help.
Hi @Glasses2, you can look for skipped searches in the Monitoring Console under Scheduler Activity: Instance or Deployment; at the bottom of the dashboard you will find a panel named Count of Skipped Reports by Name and Reason.
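As an alternative to the dashboard panel, skipped searches can also be pulled straight from the scheduler logs; a sketch using the standard _internal scheduler events (field names worth verifying on your version):

```
index=_internal sourcetype=scheduler status=skipped
| stats count by app, savedsearch_name, reason
| sort - count
```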
I have noticed that a saved search is chronically skipped (almost 100%), but I cannot trace it back to its origin. The search name is: _ACCELERATE_<redacted>_search_nobody_<redacted>_ACCELERATE_ From _internal, it is in the search app, under report acceleration, with user nobody. _audit provides no clues either. How do I trace this to the source? Thank you
It sounds like a good idea.
Wow, that was a lot simpler than the solutions I was trying to get working. Thank you!
Splunk audit logs pick the src IP for the log from the incoming packet. To me this indicates your LB is doing full-blown SNAT rather than maintaining the source IP on the 'inside' portion of the connection. This would be an issue for your network/LB admin team to resolve, if possible based on their network design. This is not something that Splunk administration/configuration can fix.
| stats values(SID) as SID values(VALUE) as VALUE by SERVICE_NAME FILE_NAME SECTION KEY | eval match=if(mvcount(SID) = 2 AND mvcount(VALUE) = 1,"Yes", "No")
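To make the logic explicit: after stats groups by the four identifying fields, mvcount(SID) = 2 means the parameter exists in both selected systems, and mvcount(VALUE) = 1 means values(VALUE) deduplicated to a single value, i.e. both systems agree. A fuller sketch with hypothetical dashboard tokens ($sid1$, $sid2$) and a hypothetical index name:

```
index=config_params (SID="$sid1$" OR SID="$sid2$")
| stats values(SID) as SID values(VALUE) as VALUE by SERVICE_NAME FILE_NAME SECTION KEY
| eval Match=if(mvcount(SID) = 2 AND mvcount(VALUE) = 1, "Yes", "No")
| table SERVICE_NAME FILE_NAME SECTION KEY Match
```

Note that values() deduplicates, so two systems with the same value yield mvcount(VALUE) = 1, while differing values, or a parameter present in only one system, yield "No".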
Hello, I have the following dataset. It consists of configuration parameters from multiple systems. Each system has somewhere in the neighborhood of 3000-5000 parameters, some of which will not exist in all systems. I am trying to come up with a list of unique parameter combinations with a Match flag that shows whether the value is identical between both systems. The flag should be false if the parameter exists in one system but not the other, or if it exists in both systems but with different values. A parameter is identified by the unique combination of SERVICE_NAME, FILE_NAME, SECTION and KEY (all four must be the same), and a system is identified by SID. The data look like this:

SID | SERVICE_NAME | FILE_NAME  | SECTION           | KEY                     | VALUE
AAA | index        | global.ini | global            | timezone_dataset        | 123
AAA | dpserver     | index.ini  | password policy   | minimal_password_length | 16
AAA | index        | index.ini  | flexible_table    | reclaim_interval        | 3600
AAA | dpserver     | global.ini | abstract_sql_plan | max_count               | 1000000
BBB | dpserver     | index.ini  | password policy   | minimal_password_length | 16
BBB | index        | index.ini  | password policy   | minimal_password_length | 25
BBB | dpserver     | global.ini | abstract_sql_plan | max_count               | 1000000
BBB | index        | index.ini  | mergedog          | check_interval          | 60000

The data is in a dashboard, along with drop-downs to select two systems to be compared. Once a user selects system AAA and system BBB, I would like the result to show:

SERVICE_NAME | FILE_NAME  | SECTION           | KEY                     | Match
index        | global.ini | global            | timezone_dataset        | No
dpserver     | index.ini  | password policy   | minimal_password_length | Yes
index        | index.ini  | flexible_table    | reclaim_interval        | No
dpserver     | global.ini | abstract_sql_plan | max_count               | Yes
index        | index.ini  | password policy   | minimal_password_length | No
index        | index.ini  | mergedog          | check_interval          | No

I have tried many different SPL searches, but none have provided the intended result. I would greatly appreciate any assistance or guidance. Cheers, David
I was looking for a solution to show / hide tables on my dashboard and the set / unset token were very useful to me. Thanks!
Could the Splunk Add-on for Salesforce team clarify whether FIPS mode is supported? Per https://docs.splunk.com/Documentation/AddOns/released/Overview/Add-onsandFIPsmode it seems certain add-ons do, but there doesn't seem to be a definitive list of what supports it and what doesn't.
Thank you for your response. I have tried your query, but it is not finding the users who have not logged in for the last 7, 30, or 90 days; it shows a total of 0. When I select a time range, it should automatically show which users have not logged into the Splunk web UI in that range. For example, we have 100 accounts in the user list and only 10 users actively log in; for the remaining users I need to identify when they last logged into Splunk.
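One common sketch for this requirement: list all Splunk users via the REST endpoint and left-join each user's most recent successful web login from _audit (the action/info values below are the usual audit ones, but worth verifying in your environment):

```
| rest /services/authentication/users splunk_server=local
| fields title
| rename title as user
| join type=left user
    [ search index=_audit action="login attempt" info=succeeded
      | stats latest(_time) as last_login by user ]
| where isnull(last_login)
| table user
```

Run over the selected time range, this lists accounts with no successful login in that range; dropping the final where clause and formatting last_login with strftime would instead show when each user last logged in.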
Thank you! I don't think I can remove the last console.warn() because the 'try' of the unset token depends on it. So now the error doesn't appear on the dashboard, but the unset token is not working. Is there a way to unquarantine console.js from the SH?
We use emails as alert outputs, arriving in a shared mailbox that also receives alerts from other products. Then we have a Power Automate flow listening to the mailbox, catching those alert emails and sending a notification to a chat group with the whole team. It works nicely and removes all the integration pain of the many tools we use.
Thank you for the reply. I will assess the workaround and check whether it fulfills the requirement.
Hi @Cleanhearty, check that the fields used (amount, gender, and category) are in the lookup and that the names are exactly the same (field names are case sensitive). Then check the amount field format. Ciao. Giuseppe
Thanks for the help. Unfortunately it didn't return any results (statistics (0)). That's weird. PS: I replaced the file name with the original.
No matter how much I challenge my management, they want (as in, strongly insist) to know when there are no events within 5 minutes, basically before one of the 20k users tells us. Depending on how long the job takes, I'll adapt the every-minute schedule to 5 minutes, 10 minutes, or whatever looks acceptable.
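A minimal sketch of such a "no events" alert, assuming a hypothetical index name for the data being monitored, scheduled every 5 minutes over the last 5 minutes and triggered when the search returns a result:

```
index=my_app_index earliest=-5m
| stats count
| where count = 0
```

stats count always returns exactly one row, so the alert condition can simply be "number of results > 0"; without the stats, an empty search would return nothing and a trigger on zero events is easy to misconfigure.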
Splunk SPL works on a pipeline of events, so, in a way, they are streamed from one command to the next:

| makeresults ``` Start an event pipeline with one event ```
``` Set the _raw field to the JSON string ```
| eval _raw="{\"json\": {\"class\": \"net.ttddyy.dsproxy.support.SLF4JLogUtils\"}}"
``` Extract the JSON from the _raw field, specifically the json.class field ```
| spath input=_raw path=json.class
``` Replace the value in the field and store it in a new field ```
| eval class=replace('json.class', "ttddyy", "goddamm-easy")
``` Show this as a table (effectively removes all other fields from the events in the pipeline) ```
| table class