All Topics

I have two queries:

1. index=A sourcetype=B "ERROR_A" | rex field=_raw "loginid (?<login_id>\d+) ::" | dedup login_id | table login_id
   Output, e.g.: 123 456 789

2. index=A sourcetype=B "ERROR_B" | rex field=_raw "loginid (?<login_id>\d+) ::" | dedup login_id | table login_id
   Output, e.g.: 878 123 456

Query 1 finds all the login IDs that failed because of ERROR_A, and Query 2 finds all the login IDs that failed because of ERROR_B. I want to find all the login IDs that failed because of both ERROR_A and ERROR_B, so the expected result from the above is 123 456. How can I combine these two queries, given that login_id is a field extracted from the raw logs?
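One possible way to combine them (a sketch, not tested against your data) is to search both error strings at once and keep only the login IDs seen with both:

```
index=A sourcetype=B ("ERROR_A" OR "ERROR_B")
| rex field=_raw "loginid (?<login_id>\d+) ::"
| eval error_type=if(searchmatch("ERROR_A"), "ERROR_A", "ERROR_B")
| stats dc(error_type) as error_count by login_id
| where error_count=2
| table login_id
```

stats dc(error_type) counts how many distinct error strings each login_id appeared with, so error_count=2 keeps only the IDs that hit both errors.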
All, will INDEXED_EXTRACTIONS = JSON perform the extractions on an all-in-one platform? Here is my props.conf. The sourcetype was applied, but none of the fields were extracted. I can see the fields and values in _raw, but they are not listed as fields. Here is what I see with an ad hoc search: the "time" field within _raw is Jan 5, 2022. I indexed the data on 2/22/22, but I am uncertain where the _time field came from; it matches nothing in the data.

props.conf (no transforms.conf):

# created on 2/22/2022 for a test case using INDEXED_EXTRACTIONS=JSON
# The non-highlighted settings are identical to a known working stanza for the exact same data
[allfields_index_extracted]
INDEXED_EXTRACTIONS = JSON
NO_BINARY_CHECK = true
LINE_BREAKER = ([\r\n]+)
EVENT_BREAKER = ([\r\n]+)
EVENT_BREAKER_ENABLE = true
SHOULD_LINEMERGE = false
TIME_PREFIX = ^"?{""?time""?:
TIME_FORMAT = %s.%6N
MAX_TIMESTAMP_LOOKAHEAD = 17
category = Structured
description = INDEXED_EXTRACTIONS eq JSON
pulldown_type = 1
# Search Time stuff
# Disable search time field extractions since INDEXED_EXTRACTIONS=JSON
KV_MODE = none
AUTO_KV_JSON = false
disabled = false

Appreciate the help!
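One hedged way to check whether the extractions actually made it into the index: indexed fields can be queried with tstats, or with the field::value syntax, both of which bypass search-time extraction. The index name yourindex and the field name fieldname below are placeholders for your own:

```
| tstats count where index=yourindex sourcetype=allfields_index_extracted by fieldname
```

```
index=yourindex sourcetype=allfields_index_extracted fieldname::somevalue
```

If these return nothing for a field you expect, the field was likely not indexed at ingest time, pointing at the props.conf not being applied on the parsing tier.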
Hello, I need to color cells in a dashboard table based on duplicate cell values (2 or more) within the same row. Here is the formatting code for the attached example:

<format type="color">
  <colorPalette type="sharedList"></colorPalette>
  <scale type="sharedCategory"></scale>
</format>

Thanks and God bless, Genesius
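As far as I know, Simple XML's <format type="color"> colors a cell from that cell's own value, so comparing cells across a row usually needs a custom JS table renderer. One SPL-side workaround (a sketch; col1, col2, and col3 are placeholder column names standing in for your real ones) is to compute a flag column that the shared palette can color instead:

```
| eval dup_flag=if(col1=col2 OR col1=col3 OR col2=col3, "duplicate", "unique")
```

You could then apply the sharedCategory coloring to dup_flag, or read it from a custom renderer to style the original cells.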
index=instance1 sourcetype=source1 "Invalid-Access" | fields reqId | table reqId

The above query gives me a table like this:
12A
32B
34C

I am unable to write a query that takes all these values and searches for results in a different sourcetype, source2. I tried the below but am not getting results. Can anyone help?

index=instance1 sourcetype=source2 [search index=instance1 sourcetype=source1 "Invalid-Access" | fields reqId | table reqId]
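The subsearch as written should expand to (reqId="12A" OR reqId="32B" OR ...), which only matches if source2 also has a field literally named reqId. A sketch of a common fix (an assumption about your data, not tested): rename the field so the generated terms match source2, either to the field name source2 actually uses, or to "search" so the values are matched as raw text:

```
index=instance1 sourcetype=source2
    [ search index=instance1 sourcetype=source1 "Invalid-Access"
      | dedup reqId
      | fields reqId
      | rename reqId as search ]
```

With the rename to "search", the subsearch expands to ("12A" OR "32B" OR "34C"), matching the values anywhere in the source2 events.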
Hi all, I'm a beginner working with Splunk. I have two log files with the same name, but from two different hosts. I would like to compare both files for an expression (e.g. "server disconnected") and only get a result when the same expression is in both files in the same time period (last 10 min.), so that I can use the result for a notification. I hope you understand what I mean. Thanks, Simon
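One possible shape for this (a sketch; your_index and the source path are placeholders, and the search would be scheduled over the last 10 minutes):

```
index=your_index source="your_logfile" "server disconnected" earliest=-10m
| stats dc(host) as host_count, values(host) as hosts
| where host_count >= 2
```

This only returns a row when the phrase was seen on two (or more) distinct hosts in the window, which an alert can then trigger on.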
I have a search result like:

GET https://…. | Status: 403 | Message: Forbidden | Duration: 166 | x-req-id: ssv5s-ssy67-78vshb | x-correlation-id: vsvsuj-75sys7-sbbjs7

I need to extract the value of x-req-id. I tried extract pairdelim="|", kvdelim=":", which gives Status, Message, and Duration, but I am not able to fetch x-req-id.
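The hyphenated key names are likely what trips up extract here; a targeted rex is usually simpler. A sketch (assuming the value never contains spaces), appended to your base search:

```
| rex field=_raw "x-req-id:\s*(?<x_req_id>\S+)"
```

Note the capture-group name uses underscores, since hyphens are not valid in regex group names; the extracted field will be x_req_id.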
Hello all,

I have a scenario where I need to make calculations regarding license consumed, per host. However, in the license_usage log the host value was squashed, and I cannot fix it for past events.

My approach to calculating average license consumption per host is:

1 - Calculate license used per index, per day:

index=_internal source="*license_usage.log" component=LicenseUsage type=usage (idx=set1_*) | timechart useother=false limit=100 span=1d sum(b) by idx | fillnull value=0

Example output for daily license consumption:

Date        set1_index1  set1_index2  set1_index3
22-02-2022  345          354          343
21-02-2022  3463         3463         234

2 - Calculate the distinct number of hosts in each index, using tstats:

| tstats values(host) as hosts, dc(host) as total_hosts where (index=set1_*) by _time,index | timechart useother=false limit=100 span=1d max(total_hosts) as "TotalHosts" by index | fillnull value=0

Example output for number of hosts per index:

Date        set1_index1  set1_index2  set1_index3
22-02-2022  2            6            4
21-02-2022  4            1            2

ISSUE: The names of the columns are not static. I can only rely on a prefix, defined in the index naming conventions.

Objective: If I can divide the daily license consumption by the number of hosts, I get the average consumption per host. Can any of you help me find how to divide the values in the first query by the ones in the second, getting output similar to the table below?

Date        set1_index1  set1_index2  set1_index3
22-02-2022  172,5        59           85,75
21-02-2022  865,75       3463         117

Thanks in advance for your help on this issue.
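One possible approach (a sketch, untested; it assumes both searches cover the same days so the rows line up for appendcols): prefix the host-count columns inside the subsearch with a wildcard rename, then divide column-by-column with foreach:

```
index=_internal source="*license_usage.log" component=LicenseUsage type=usage idx=set1_*
| timechart useother=false limit=100 span=1d sum(b) by idx
| appendcols
    [| tstats dc(host) as total_hosts where index=set1_* by _time, index
     | timechart useother=false limit=100 span=1d max(total_hosts) by index
     | rename set1_* as hosts_set1_*
     | fields - _time]
| fillnull value=0
| foreach set1_* [eval <<FIELD>>=if('hosts_<<FIELD>>'>0, '<<FIELD>>'/'hosts_<<FIELD>>', 0)]
| fields - hosts_*
```

The foreach wildcard only needs your stable prefix: <<FIELD>> expands to each license column (e.g. set1_index1), and 'hosts_<<FIELD>>' composes the matching renamed host-count column.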
Hi, I'm struggling to get this working. I want an alert to evaluate to true (trigger) based on whether it is deemed active or inactive in a lookup table. The idea is that the SPL would always check the lookup, and if the alert SPL evaluates to true, it would perform its normal action. This way, we could disable numerous alerts (make them evaluate to false) by updating one value in a lookup table instead of clicking Disable for each alert. I was thinking I could do something like:

index=main | append [| inputlookup AlertSample.csv where AlertName=MySampleName | fields IsOn]

appending the IsOn value to all the events, but it's not working and I have tried many SPL variants. Any suggestions, or a better way of doing this? Thank you! Chris
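A pattern that may work (a sketch; it assumes IsOn holds 1/0 in the lookup, and your_alert_conditions stands in for the rest of the alert's SPL): append only kills the approach because the lookup's IsOn lives on its own row, so spread it onto every event with eventstats before filtering:

```
index=main your_alert_conditions
| append [| inputlookup AlertSample.csv where AlertName="MySampleName" | fields IsOn]
| eventstats max(IsOn) as IsOn
| where IsOn=1 AND isnotnull(_raw)
```

The eventstats copies the lookup's IsOn onto all rows, and isnotnull(_raw) drops the appended lookup row itself so it cannot trigger the alert on its own.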
Hi all, hope you are well. I have a task to get users' Chrome extension lists with Splunk search queries. I couldn't figure out how to do this. I am new to Splunk and sometimes ask the community too many questions; sorry about that. Thanks in advance. Best regards.
Hi, I am creating a timechart of the average daily temperature range (max temp - min temp) in the UK over the last 30 years. This is my current search:

index="midas_temp" MET_DOMAIN_NAME=DLY3208 | eval trange=MAX_AIR_TEMP - MIN_AIR_TEMP | timechart avg(trange)

Currently the X-axis displays the years 1992-2019 as separate years, but I want the X-axis to show the months of the year (i.e. January - December), so the graph shows the average daily temperature ranges from 1992-2019 folded onto a one-year interval. Thanks
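One way to get a month-of-year axis (a sketch) is to drop timechart's continuous time axis and bucket by month name with stats instead:

```
index="midas_temp" MET_DOMAIN_NAME=DLY3208
| eval trange=MAX_AIR_TEMP - MIN_AIR_TEMP
| eval month_num=strftime(_time, "%m"), month=strftime(_time, "%b")
| stats avg(trange) as avg_trange by month_num, month
| sort 0 month_num
| fields - month_num
```

stats averages every January from 1992-2019 into a single "Jan" row, and the hidden month_num field keeps the rows in calendar rather than alphabetical order.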
Hello all, I'm trying to connect my indexer cluster to an on-premise S3 storage, using the master node to do it. I've tested the access credentials with a standalone instance outside my cluster, and they work.

Now I'm trying to use two different apps to declare the volume and the index, like this:

.../master-apps/common_indexers/local/indexes.conf

# volume stanza
[volume:bucket1]
storageType = remote
path = s3://bucket1
remote.s3.endpoint = https://mys3.fr
remote.s3.access_key = xx
remote.s3.secret_key = xx
remote.s3.signature_version = v2
remote.s3.supports_versionning = false
remote.s3.auth_region = EU

.../master-apps/common_indexes/local/indexes.conf

# index stanza
[index1]
homePath = $SPLUNK_DB/$_index_name/db
thawedPath = $SPLUNK_DB/$_index_name/thaweddb
coldPath = $SPLUNK_DB/$_index_name/colddb
remotePath = volume:bucket1/$_index_name

When validating the bundle, I get this error:

<bundle_validation_errors on peer>
[Critical] Unable to load remote volume "bucket1" of scheme "s3" referenced by index "index1": Could not find access_key and/or secret_key in a configuration file
[Critical] in environment variables or via the AWS metadata endpoint.

I don't understand what is wrong. File precedence is respected, i.e. the volumes are read before the indexes, and I verified that splunk is the owner of the files and has correct access to them. I'm out of ideas. Thank you in advance for your suggestions. Regards, Ema
Hello, I would like to disable a warning shown in the SH cluster for non-admin users, or is there any way to hide the warning showing up in the DB Connect UI? I read a few Splunk docs but was not able to find an accurate answer, so I am looking for help with this. Attaching a screenshot for clarification. Thanks, Akhil Shah
Hi all, our client has sent syslog data using SC4S to our dev endpoints, but we are unable to see the logs in our environment. The host sending these logs is an HPE NonStop server running BASE24. Could anyone help with finding where these logs are going missing?
I use LDAP for users. I want to restrict a few users temporarily while Splunk is in degraded mode; perhaps creating a local account with the same name will disable their login. So I created the user using:

curl -k -u  server:/services/authentication/users -d name=user -d password=password -d roles=user

but now I delete that local account using this:

curl -k -u  --request DELETE server:/services/authentication/users/users

The response returns all of the thousands of users in LDAP. How do I limit the response to just a status code? Thanks
Hi, I'm new to Splunk. My question is: does sort work like "order by" in SQL for a list of fields, i.e. divide the rows into groups by the first key and then sort within each group? For example:

no  time
1   2022-01-22 18:00:00.000
2   2022-01-20 18:00:00.000
2   2022-01-26 18:00:00.000
1   2022-01-21 18:00:00.000

In SQL, using "order by no, time desc", the result is:

no  time
1   2022-01-22 18:00:00.000
1   2022-01-21 18:00:00.000
2   2022-01-26 18:00:00.000
2   2022-01-20 18:00:00.000

But in SPL, when I use "sort str(no), -str(time)", the result is:

no  time
2   2022-01-26 18:00:00.000
1   2022-01-22 18:00:00.000
1   2022-01-21 18:00:00.000
2   2022-01-20 18:00:00.000

Is sort different from order by in SQL, or is my command just wrong? Thank you very much for answering my question!
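SPL's sort is multi-key like SQL's ORDER BY: it sorts by the first field and breaks ties with the next, with a leading "-" marking a key as descending. A form worth trying (sort 0 lifts the default 10,000-row limit):

```
| sort 0 str(no), -str(time)
```

With timestamps in YYYY-MM-DD HH:MM:SS format, lexicographic and chronological order coincide, so str() is safe for the time key. If the interleaved output persists, it may be worth checking whether a later command in the pipeline re-orders the rows.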
----------------------- DISK INFORMATION ----------------------------
DISK="/dev/sda" NAME="sda" HCTL="0:0:0:0" TYPE="disk" VENDOR="3PARdata" SIZE="120G" SCSIHOST="0" CHANNEL="0" ID="0" LUN="0" BOOTDISK="TRUE"
DISK="/dev/sdb" NAME="sdb" HCTL="0:0:0:1" TYPE="disk" VENDOR="3PARdata" SIZE="300G" SCSIHOST="0" CHANNEL="0" ID="0" LUN="1" BOOTDISK="FALSE"
DISK="/dev/sdc" NAME="sdc" HCTL="0:0:1:0" TYPE="disk" VENDOR="3PARdata" SIZE="120G" SCSIHOST="0" CHANNEL="0" ID="1" LUN="0" BOOTDISK="TRUE"
DISK="/dev/sdd" NAME="sdd" HCTL="0:0:1:1" TYPE="disk" VENDOR="3PARdata" SIZE="300G" SCSIHOST="0" CHANNEL="0" ID="1" LUN="1" BOOTDISK="FALSE"
DISK="/dev/sde" NAME="sde" HCTL="7:0:0:0" TYPE="disk" VENDOR="3PARdata" SIZE="120G" SCSIHOST="7" CHANNEL="0" ID="0" LUN="0" BOOTDISK="TRUE"
DISK="/dev/sdf" NAME="sdf" HCTL="7:0:0:1" TYPE="disk" VENDOR="3PARdata" SIZE="300G" SCSIHOST="7" CHANNEL="0" ID="0" LUN="1" BOOTDISK="FALSE"
DISK="/dev/sdg" NAME="sdg" HCTL="7:0:1:0" TYPE="disk" VENDOR="3PARdata" SIZE="120G" SCSIHOST="7" CHANNEL="0" ID="1" LUN="0" BOOTDISK="TRUE"
DISK="/dev/sdh" NAME="sdh" HCTL="7:0:1:1" TYPE="disk" VENDOR="3PARdata" SIZE="300G" SCSIHOST="7" CHANNEL="0" ID="1" LUN="1" BOOTDISK="FALSE"

My multiline event looks like this in Splunk. Could someone please help me extract all the fields, like DISK, NAME, HCTL, TYPE, VENDOR, and SIZE, using SPL?
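Since each disk line is already in KEY="value" form, one hedged option is rex with max_match=0, which extracts each field as a multivalue (one value per disk line). The base search below is a placeholder for your own:

```
index=your_index "DISK INFORMATION"
| rex max_match=0 field=_raw "DISK=\"(?<DISK>[^\"]+)\"\s+NAME=\"(?<NAME>[^\"]+)\"\s+HCTL=\"(?<HCTL>[^\"]+)\"\s+TYPE=\"(?<TYPE>[^\"]+)\"\s+VENDOR=\"(?<VENDOR>[^\"]+)\"\s+SIZE=\"(?<SIZE>[^\"]+)\""
```

If you want one row per disk instead of multivalue fields, a common follow-up is to split the event on line breaks and mvexpand before running the rex per line.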
Hello! My question is: When I send logs into the Splunk Cloud platform, where exactly do they go? Are they also stored in buckets on an indexer, and if so, how many indexers?
Hey, I have a rule that reports to me each time a source stops sending logs to my Splunk. I am trying to make an exception so that when one specific source from one specific host stops sending logs, it won't trigger an alert. For example: I want alerts for host=* source=*, but not when host=windows31 and source=application. Is it possible to do that? I have been working on it for a few days already.
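This can usually be expressed directly in the base search with a NOT over the combined pair (a sketch against your rule's search terms):

```
host=* source=* NOT (host=windows31 source=application)
```

Inside the parentheses the terms are implicitly ANDed, so only events where both the host is windows31 and the source is application are excluded; windows31 events from other sources still alert.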
Hi there, I'd like to customize the color of my spinners based on a specific value. The values are not numeric but strings (High, Medium, Low). I know there is an option to set the color directly inside the search query, but I don't know how to use it and I can't access the documentation. My spinners must look like the image below, but in only one panel instead of three; the color has to change according to the level (High, Medium, Low). Thank you, Big Big Shak
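Single-value coloring is normally driven by a range field, so one workaround (a sketch; level is a placeholder for your string field) is to map the strings to range names SPL-side and let the panel color from that:

```
| eval range=case(level=="High", "severe", level=="Medium", "elevated", level=="Low", "low")
```

The names severe/elevated/low mirror the categories the built-in rangemap coloring understands; alternatively, map the strings to numbers and configure rangeValues/rangeColors in the panel's source.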
I updated a released app. When I installed it from a local file on the portal and returned to the home page, the logo of my app was missing, and so was the logo in the bar when I jumped into my app. I found that if I restart the splunkd process and log in again, the logo displays fine. I'm confused by that.