All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hello, I apologize if this is not in the correct location. To simplify things, let's say I have a dashboard with 2 panels running 2 separate search queries. Both are a "| stats count" and both return values. What I want to do is change the color of the value of Panel A based on the result of Panel B: if, for example, the value of Panel B is 50% larger or smaller than the value of Panel A, then the value of Panel A should turn yellow. But I don't know how to turn the value of a panel into a variable or a token and use that variable or token to create a range based on a % of that value. Is this possible?

I did some research in the Splunk documentation and thought I had found a way to make it work, but I'm not able to get it working. Basically I tried doing

<set token="result_token">$job.resultCount$</set>

which in my mind would take the result count of the panel, store it in "result_token", and then let me use that "result_token" to do something like:

<option name="drilldown">none</option>
<option name="rangeColors">["yellow","purple"]</option>
<option name="rangeValues"> "result_token" > 50% = yellow</option>
<option name="rangeValues"> "result_token" < 50% = yellow</option>
<option name="refresh.display">progressbar</option>
<option name="useColors">1</option>

I don't know if I made any sense.
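A minimal sketch of how this kind of cross-panel coloring is commonly wired up (the index names a and b are placeholders). Two caveats worth knowing: $job.resultCount$ is the number of result rows (always 1 for a "stats count"), so a <done> handler with $result.count$ is what captures the value itself, and rangeValues takes a fixed list of numeric thresholds, so the percentage comparison has to happen in SPL:

<panel>
  <single>
    <search>
      <query>index=b | stats count</query>
      <done>
        <!-- $result.count$ reads the "count" field of the first result row -->
        <set token="result_token">$result.count$</set>
      </done>
    </search>
  </single>
</panel>
<panel>
  <single>
    <search>
      <query>index=a | stats count
| eval pct_diff=round(abs(count - $result_token$) / $result_token$ * 100, 1)
| fields pct_diff</query>
    </search>
    <!-- green below a 50% difference, yellow at or above -->
    <option name="rangeColors">["0x53a051","0xf8be34"]</option>
    <option name="rangeValues">[50]</option>
    <option name="useColors">1</option>
  </single>
</panel>

Because Panel A's query references $result_token$, it will not run until Panel B's search finishes and sets the token. The trade-off of this sketch is that Panel A displays the percentage difference rather than its own raw count.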
Hi, I have an index that returns logs with fields like _time and API name. I would like to display, in a dashboard, report, or alert, which APIs have been inactive for more than one week. What I do right now is find the most recent time with latest(_time) and compare it with now using relative_time. It works, but the time range is All Time and it takes several seconds. I am worried that as time goes on it will take too long to get a result. Is there a better way to achieve this?
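A hedged sketch: tstats reads indexed metadata rather than raw events, so it stays fast even over All Time. This assumes the API name is an indexed field (called api_name below, with a placeholder index name); if it is only extracted at search time, a scheduled summary search or an accelerated data model would be needed instead:

| tstats latest(_time) as lastSeen where index=your_index by api_name
| where lastSeen < relative_time(now(), "-7d")
| eval lastSeen=strftime(lastSeen, "%F %T")

The where clause keeps only APIs whose most recent event is older than seven days.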
I need to customize the "Data Processing Queues" panel provided by the Monitoring Console. I found that the "median" aggregate function on the stats and timechart commands does not work correctly. Launching the following search over All Time on my PC (host=localhost), I get a median of 0 whenever the values include a 0. In the example attached, the correct median is 0.73, but Splunk calculates 0.

(group=queue host=localhost index=_internal name=* source=*metrics.log sourcetype=splunkd)
| eval ingest_pipe=if(isnotnull(ingest_pipe),ingest_pipe,"none")
| search ingest_pipe=*
| where match(name,"agg")
| eval max=if(isnotnull(max_size_kb),max_size_kb,max_size), curr=if(isnotnull(current_size_kb),current_size_kb,current_size), fill_perc=round(((curr / max) * 100),2)
| timechart minspan=30s median(fill_perc) values(fill_perc) avg(fill_perc) useother=false limit=15

Has anyone else seen this issue?
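A minimal reproduction, independent of metrics.log, can help separate a genuine median() problem from an artifact of how fill_perc is computed (the numbers below are made up to mirror the reported case):

| makeresults count=5
| streamstats count as n
| eval fill_perc=case(n==1, 0, n==2, 0.5, n==3, 0.73, n==4, 0.9, n==5, 1.2)
| stats median(fill_perc) as med

If med comes back as 0.73 here, the 0 in the dashboard is more likely coming from the data itself (for example, rows where max is null or zero dragging fill_perc to 0) than from the median function.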
I'm currently working on a project that maps different events at different times in different service areas, and so far I've had a lot of luck with geostats. I'm fairly new to Splunk, SQL, and XML but have been able to do a lot on my own. I have two questions:

1. Each event that is accumulated on the geostats map has a value assigned to it (between 0 and 7) in a particular field. Is there a way for me to assign a color to each value? I want to be able to look at the map and discern between these different values.

2. I also created a table with these values, with the Lat, Long, Time, and Value [the 0-7], but I want to be able to link it to my geostats map. Is there a way to highlight/reveal the plot point, either when hovering over a row or clicking on it within the table?

I'll number and post both search strings:

1. Geostats map:

source="e:\\folder"
| rex field=_raw "longitude:(?<long>.*) latitude:(?<lat>.*)"
| rex field=_raw "value_id:(?<Value>.*)"
| rex field="date_hour" "(?P<Time>[^\s]+)"
| search long!="null"
| search lat>"0"
| eval long=tonumber(long)
| eval lat=tonumber(lat)
| eval lat=printf("%.*f", 8, lat)
| eval long=printf("%.*f", 8, long)
| eval Time=strftime(_time, "%b-%d %H:%M:%S.%Q")
| geostats count longfield=long latfield=lat translatetoxy=true maxzoomlevel=10

2. Table:

source="e:\\folder"
| rex field=_raw "longitude:(?<long>.*) latitude:(?<lat>.*)"
| rex field=_raw "value_id:(?<Value>.*)"
| rex field="date_hour" "(?P<Time>[^\s]+)"
| search long!="null"
| search lat>"0"
| eval long=tonumber(long)
| eval lat=tonumber(lat)
| eval Value=tonumber(Value)
| eval long=long*-180/pow(2, 23)
| eval lat=lat*90/pow(2, 23)
| eval lat=printf("%.*f", 8, lat)
| eval Value=printf("%.1s", Value)
| eval long=printf("%.*f", 8, long)
| eval Time=strftime(_time, "%b-%d %H:%M:%S.%Q")
| table lat, long, Time, Value

Also, if anyone has any criticism of how I can clean this up, let me know. Again, I'm fairly new to this. Thanks!
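For question 1, a hedged pointer: geostats accepts a split-by clause, and the map then gives each category of the split-by field its own series color, which may be all the coloring needed here. A sketch against the poster's own field extractions:

source="e:\\folder"
| rex field=_raw "longitude:(?<long>.*) latitude:(?<lat>.*)"
| rex field=_raw "value_id:(?<Value>.*)"
| geostats latfield=lat longfield=long count by Value

At low zoom levels, nearby points aggregate into pie markers showing the mix of Values; zooming in separates them into individually colored points.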
Hello, I am currently attempting to set up a custom Java exit point to retrieve an LDAP URL that is being monitored by the Java Agent. I have attempted several configurations, but none seem to be working. I am currently using this doc: https://docs.appdynamics.com/appd/22.x/latest/en/application-monitoring/configure-instrumentation/backend-detection-rules/java-backend-detection/custom-exit-points-for-java#id-.CustomExitPointsforJavav22.1-LDAPExitPoints However, it still only recovers the automatically detected backend LDAP information. Any help would be appreciated. Thanks!
Hello, does anyone know if it's possible to pull back the time from all of the Splunk infrastructure? I have over 200 IDX / SHD / DEP etc. servers, in 4 regions around the world, and I think my NTP is failing/drifting. I want to show my IT dept the problem, if we have one. So is it possible to ask all of the Splunk infrastructure for the time, so I can see/show at a glance that, say, one IDX server is 5 minutes out from its cluster buddies? Thanks.
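One rough, hedged approach: compare event time to index time on each instance's own splunkd logs. The lag conflates clock skew with forwarding/indexing delay, so a consistently large offset on one host is a lead to investigate rather than proof of drift:

index=_internal source=*splunkd.log* earliest=-15m
| eval lag_s=_indextime - _time
| stats avg(lag_s) as avg_lag_s max(lag_s) as max_lag_s by host
| sort - avg_lag_s

Hosts whose own clock runs fast will even show negative lag, which indexing delay alone cannot produce.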
I have two Splunk consoles - one has alerting, the other does not. How do I add alerting to the one that doesn't have it? I do not have "Save as Alert".
Hi Community Support, I have a lookup file of IP addresses where all the values are IP addresses, including the very first field (the header row), and it keeps changing. Dummy example:

192.168.10.10
192.168.10.11
192.168.10.12

Because the very first field value is itself an IP address, I want to add a header field to this lookup via a Splunk search, so that my lookup looks like this:

ip_address
192.168.10.10
192.168.10.11
192.168.10.12

Kindly suggest how to achieve this. Many thanks.
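A sketch, assuming the lookup (called ips.csv here) has exactly one column: foreach addresses the unknown column name, '<<FIELD>>' (single quotes) reads the column's value, and "<<FIELD>>" (double quotes) yields the column name itself as a string, which recovers the header IP as a data row:

| inputlookup ips.csv
| foreach * [ eval ip_address='<<FIELD>>' ]
| table ip_address
| append
    [ | inputlookup ips.csv
      | head 1
      | foreach * [ eval ip_address="<<FIELD>>" ]
      | table ip_address ]
| outputlookup ips_with_header.csv

The output file name is a placeholder; writing back to the same file should also work once the result looks right.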
Hi, I want to find the license utilization of firewall logs based on severity level. Can anyone help me with a query on how to find the license utilization based on particular events, like EventID in Windows logs?
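A hedged note first: license_usage.log only breaks usage down by index, source, sourcetype, and host, not by event fields, so per-severity usage is usually estimated from raw event size instead. A sketch with placeholder index and field names:

index=your_fw_index sourcetype=your_fw_sourcetype
| eval bytes=len(_raw)
| stats sum(bytes) as bytes by severity
| eval GB=round(bytes/1024/1024/1024, 3)

Replace severity with EventCode for the Windows case. This approximates license volume because licensing is charged on raw bytes indexed.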
Hi, we have created aggregation policies and configured action rules to create a ticket. We have a requirement to prevent the ticket from being created for a few of the hosts. How do we define the filtering criteria to exclude those hosts, so that the ticket will not be created for them? And will the episodes still get created in this case? Please clarify. Thanks.
I am trying to execute a dashboard in Splunk and I'm getting an error message; it has to do with time. Below is the code. I'm new to Splunk and I'm having an issue with formatting the eval function. I had to comment out the CASE number, but see if you can find out why the eval statement is not working in the query:

index="salesforce" source="/informatica/pmrootdir/Splunk_SFDC_Logs/*-eventlogfile-splunk.csv" host="louapplps2024" CASE(xxxxxxxxxxxxxx) earliest=-7d latest=now (EVENT_TYPE="ApexExecution") QUIDDITY IN (R,V,W,H,X,M,L,K,I,E) IS_LONG_RUNNING_REQUEST=1
| dedup REQUEST_ID
| eval oraDbTime=IF(!isnull(sdbDbTime),round(oraDbTime/1000000,0),oraDbTime)
| eval dbCpuTime=IF(!isnull(sdbDbTime),round(dbCpuTime/1000000,0),dbCpuTime)
| eval sdbDbTime=round(sdbDbTime/1000000,0)
| eval reqIsNull=IF(isnull(requestId) AND entryPoint="TRIGGERS","YES","NO")
| where reqIsNull="NO"
| eval permitTime= (runTime-5000)
| eval permitTimeInSecs= (permitTime/1000)
| eval runTimeInSecs= (runTime/1000)
| eval endTime=_time
| eval startTime=(_time-runTimeInSecs)
| eval conTime=(_time-permitTimeInSecs)

$org_token$ EVENT_TYPE="LightningPageView" OR EVENT_TYPE="LightningPerformance" APP_NAME="siteforce:communityApp" $userId$ $app$
| timechart span=$timespan$ dc(USER_ID) as Views
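One concrete problem worth flagging in the evals above: the eval language has no standalone ! negation operator (! only appears inside !=), so IF(!isnull(...), ...) fails to parse. Using isnotnull() (or the NOT keyword) should clear that error; a sketch against the same field names:

| eval oraDbTime=if(isnotnull(sdbDbTime), round(oraDbTime/1000000, 0), oraDbTime)
| eval dbCpuTime=if(isnotnull(sdbDbTime), round(dbCpuTime/1000000, 0), dbCpuTime)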
Good afternoon! We need to send a field whose name contains a dot in the message: result.code. But the request in which I specify this field fails. Request:

index="main" sourcetype="testsystem-script99"
| transaction maxpause=10m srcMsgId Correlation_srcMsgId messageId result.code
| table _time srcMsgId Correlation_srcMsgId messageId result.code
| fields _time srcMsgId Correlation_srcMsgId messageId result.code
| sort srcMsgId _time
| streamstats current=f window=1 values(_time) as prevTime by subject
| eval timeDiff=_time-prevTime
| delta _time as timeDiff
| where (result.code)>0

Error:

Error in 'where' command: Type checking failed. The '>' operator received different types.
The search job has failed due to an error. You may be able view the job in the Job Inspector.

The error does not occur with these variants: resultcode, result-code, result_code. Please tell me, what could be the problem?
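A likely explanation: where takes an eval expression, and in the eval language a bare dot is the string-concatenation operator, so result.code is parsed as the concatenation of two (null) fields named result and code, and comparing that string to 0 triggers the type error. Wrapping the field name in single quotes tells eval to treat it as one field:

| where 'result.code' > 0

The same single-quoting applies anywhere an eval expression references the field; commands like table and fields take field names directly, which is why they don't complain.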
Hi, I'm trying to update a KV store so that the only entries in it are for consecutive returns from a search.

For example, say the KV store has the existing entries:

Title          Count
Daily Check 1  5
Daily Check 2  1
Daily Check 3  1

and the search returns:

Label
Daily Check 1
Daily Check 3
Daily Check 4

The new KV store should look like:

Title          Count
Daily Check 1  6
Daily Check 3  2
Daily Check 4  1

Thanks!
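A sketch, assuming a lookup definition named kv_check_lookup over the collection: outputlookup replaces a KV store's contents by default, so labels absent from the current results drop out, matches get their stored Count incremented, and new labels start at 1, which matches the desired behavior:

your_search
| dedup Label
| rename Label as Title
| lookup kv_check_lookup Title OUTPUT Count
| eval Count=coalesce(Count, 0) + 1
| table Title Count
| outputlookup kv_check_lookup

This assumes Title is the lookup's key field; your_search and the lookup name are placeholders.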
Hi everyone, I'm stuck with an issue I can't understand... I created an app that uses a custom alert action which generates events to log (it writes a file under $SPLUNK_HOME$/var/spool/). An example of the file could be:

Name: 1664448416_92764.stash_sourcetype1

***SPLUNK*** index="myindex" host="Host1" source="Source1"
==##~~##~~ 1E8N3D4E6V5E7N2T9 ~~##~~##==
{...event...}

I have set up an inputs.conf which is looking for this file:

[batch://$SPLUNK_HOME/var/spool/splunk/...stash_sourcetype1]
queue = stashparsing
sourcetype = stash_sourcetype1
move_policy = sinkhole
crcSalt = <SOURCE>

In my props.conf, I have:

[stash_sourcetype1]
TRUNCATE = 0
# only look for ***SPLUNK*** on the first line
HEADER_MODE = firstline
# we can summary index past data, but rarely future data
MAX_DAYS_AGO = 10000
# 5 years difference between two events
MAX_DIFF_SECS_AGO = 155520000
MAX_DIFF_SECS_HENCE = 155520000
TIME_PREFIX = (?m)^\*{3}Common\sAction\sModel\*{3}.*$
MAX_TIMESTAMP_LOOKAHEAD = 25
LEARN_MODEL = false
# break .stash_new custom format into events
SHOULD_LINEMERGE = false
BREAK_ONLY_BEFORE_DATE = false
LINE_BREAKER = (\r?\n==##~~##~~ 1E8N3D4E6V5E7N2T9 ~~##~~##==\r?\n)
KV_MODE = json
TRANSFORMS-0parse_cam_header = orig_action_name_for_stash_cam,orig_sid_for_stash_cam,orig_rid_for_stash_cam,sourcetype_for_stash_cam
TRANSFORMS-1sinkhole_cam_header = sinkhole_cam_header

As you can see, I have configured my props.conf to read the first line, "***SPLUNK***", in order to recover the index, host, and source. However, Splunk continues to put all the events in the "main" index and uses default values for "source" and "host". It's as if it's ignoring this directive when it should be taking it into account. Does anyone know why it's being ignored? I can't find much documentation on this issue... For your information, I'm working on a standalone instance of Splunk Enterprise. Thank you.

EDIT: I've just noticed that my events are indexed with the sourcetype "stash_sourcetype1-too_small", which may be the reason. But why is it appending "-too_small", and how can I prevent it?
Hi Splunkers, I have data like this (all in one event):

Primary Key_1:
     subkey_1 : subvalue_1
     subkey_2 : subvalue_2
Primary Key_2:
     subkey_1 : subvalue_1
     subkey_2 : subvalue_2

I can extract the data, but I want to see it in Splunk as key1.subkey_1 = subvalue_1. I tried FORMAT = ($1::$2:):$3 in transforms.conf, but it failed. What is the best way to extract this data the way I want? Is it even possible?
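A transforms.conf FORMAT cannot easily pair a primary key on one line with subkeys on the following lines, so this kind of flattening is usually easier at search time. A hedged sketch, assuming primaries start at column 0 and subkeys are indented (the "=" in mvzip is just a scratch separator, so it assumes values contain no "="):

| rex max_match=0 "(?m)^(?<block>\S.*:[ \t]*(?:\r?\n[ \t]+.+)+)"
| mvexpand block
| rex field=block "^(?<pk>[^:\r\n]+?)\s*:"
| rex field=block max_match=0 "(?m)^[ \t]+(?<sk>[^:\r\n]+?)\s*:\s*(?<sv>.+?)[ \t]*$"
| eval pair=mvzip(sk, sv, "=")
| mvexpand pair
| eval name=pk . "." . mvindex(split(pair, "="), 0)
| eval {name}=mvindex(split(pair, "="), 1)
| fields - block pk sk sv pair name

The first rex cuts the event into one block per primary key, the second and third pull the primary name and its subkey/subvalue pairs, and the dynamic eval {name} creates fields like "Primary Key_1.subkey_1".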
Hi Community, on a Universal Forwarder I see these logs:

09-29-2022 12:12:17.410 +0200 INFO Metrics - group=queue, name=aeq, blocked=true, max_size_kb=500, current_size_kb=499, current_size=61, largest_size=61, smallest_size=18

I know this is related to gz files; in fact, Splunk is monitoring .gz files. To increase the queue size, I usually push a server.conf with new values, like this:

[queue=aeq]
maxSize = 2MB

It doesn't seem to be working, because I keep seeing in the logs:

Metrics - group=queue, name=aeq, blocked=true, max_size_kb=500

Do you know how this queue size can be edited?

Thanks,
Marta
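A hedged first step: confirm which server.conf actually wins on the forwarder, since a higher-precedence copy (for example under etc/system/local) can override a deployed app's setting, and a forwarder restart is needed either way. btool shows the effective value and the file it came from:

$SPLUNK_HOME/bin/splunk btool server list "queue=aeq" --debug

If max_size_kb=500 persists after the stanza is confirmed in effect and the UF restarted, the stanza name or the queue's configurability on a UF would be the next thing to question.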
Hi, we have the below problem data in a lookup (assestId and item_deviceId hold multiple values per row):

pan: 11023
assestId: ass#ABC1#man6558962f, asst#ABC1#man827631e
item_deviceId: ite#0#man76451627ahdgs, ite#0#man76451627ahd75, ite#0#man76451627ahdgs
phoneNumber: 8763173699
imeID: 123456789

pan: 11023
assestId: ass#ABC1#man6558962f, asst#ABC1#man827631e
item_deviceId: ite#0#man76451627ahdgs, ite#0#man76451627ahd75, ite#0#man76451627ahd75
phoneNumber: 8736628187
imeID: 987654321

Now we need a new field "Mobile_DeviceId" derived from "assestId" for each row, so the Splunk table shows:

Row 1: Mobile_DeviceId = ass#ABC1#man6558962f
Row 2: Mobile_DeviceId = asst#ABC1#man827631e

Is this possible with SPL? Please help me create the SPL. My query is:

| inputlookup abc.csv
| table pan assestId item_deviceId phoneNumber imeID
| eval Mobile_DeviceId=split(assestId," ")
| mvexpand Mobile_DeviceId
| search Mobile_DeviceId=ass#*
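If the pattern in the sample holds (the third item_deviceId entry repeats whichever of the first two identifies the device, and assestId's entries line up with them positionally), a sketch like this might work; it assumes both fields are stored space-separated in the CSV:

| inputlookup abc.csv
| eval assestId=split(assestId, " "), item_deviceId=split(item_deviceId, " ")
| eval idx=if(mvindex(item_deviceId, 2) == mvindex(item_deviceId, 0), 0, 1)
| eval Mobile_DeviceId=mvindex(assestId, idx)
| table pan assestId item_deviceId phoneNumber imeID Mobile_DeviceId

The inference about which entry "wins" is taken from only two sample rows, so it should be validated against more data before relying on it.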
I'm sure this must be possible, but I can't find a way; unfortunately, there are a couple of threads on this with no solution. I just want to display a table vertically, with titles as column 1 and values as column 2, like a bullet list. All the information I found suggests that the "transpose" command is the way to go, but I don't know how to achieve it. Any suggestions?

Field 1  Field 2  Field 3
Value    Value    Value

And this is how I'd like to express the table:

Fields   Values
Field 1  Value
Field 2  Value
Field 3  Value
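A sketch for a single-row result (field names are placeholders): column_name labels the new first column, and transpose names the transposed data columns "row 1", "row 2", and so on, so a rename finishes the job:

your_search
| table "Field 1" "Field 2" "Field 3"
| transpose column_name=Fields
| rename "row 1" as Values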
Hi, we would like to know which users are in the local Administrators group, and which is the active user account, on our Windows clients.

1. To get the local admins, we use "net localgroup Administrators" and write the output into a text file. This is the Output.txt:

-------------------------------------------------------------------------------
Aliasname     Administratoren
Beschreibung  Administratoren haben uneingeschränkten Vollzugriff auf den Computer bzw. die Domäne.

Mitglieder
-------------------------------------------------------------------------------
Administrator
AdminX
AdminY
AdminZ
User
Der Befehl wurde erfolgreich ausgeführt.
-------------------------------------------------------------------------------

Now there are five members in the local Administrators group. How can we get these values into fields? Like:

localAdmin = Administrator
localAdmin = AdminX
localAdmin = AdminY
localAdmin = AdminZ
...

2. We use "query user" to get the active user and write the output in a text file. This is the output.txt:

BENUTZERNAME  SITZUNGSNAME  ID  STATUS  LEERLAUF  ANMELDEZEIT
>user         console       1   Aktiv   1:07      26.09.2022 12:41

How can we extract these fields? Like:

Benutzername = user
Sitzungsname = console
ID = 1
...

Thank you in advance!
Dominik
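A sketch for search-time extraction, assuming each file is ingested as one multi-line event (the sourcetype names are placeholders). For the net localgroup output, the member names sit between the dashed line after "Mitglieder" and the "Der Befehl" footer, so one rex isolates that block and a second pulls each name as a multivalue field:

sourcetype=local_admins_txt
| rex "(?ms)Mitglieder\s*-{10,}\s*(?<member_block>.*?)Der Befehl"
| rex field=member_block max_match=0 "(?m)^(?<localAdmin>\S+)[ \t]*$"

For the query user output, one positional rex on the data line should do, with >? absorbing the marker of the current session:

sourcetype=query_user_txt
| rex "(?m)^\s*>?(?<Benutzername>\S+)\s+(?<Sitzungsname>\S+)\s+(?<ID>\d+)\s+(?<Status>\S+)"

Both patterns are keyed to the German sample output above and would need adjusting for other OS languages or usernames containing spaces.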
Splunkd health has the following message:

The percentage of non-high priority searches skipped (97%) over the last 24 hours is very high and exceeded the red thresholds (20%) on this Splunk instance.

How do I find which searches are high priority and which are non-high priority in Splunk?
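Two hedged starting points: scheduled searches are marked high priority via schedule_priority (higher/highest) in savedsearches.conf, which the REST API exposes, and the skips themselves are logged by the scheduler:

| rest /servicesNS/-/-/saved/searches
| search is_scheduled=1 disabled=0
| table title eai:acl.app schedule_priority

index=_internal sourcetype=scheduler status=skipped earliest=-24h
| stats count by savedsearch_name app reason
| sort - count

The first search lists each scheduled search's priority setting; the second shows which searches were skipped in the last 24 hours and why, which is usually more actionable than the priority label itself.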