All Topics

Hi Everyone, the goal here is to auto-increment/decrement a value based on the position of a character in a string. For example, here I am trying to pull out the position of "R" and assign a value based on it. This works, but only when "pos" is less than 3; I would like to assign a value for each and every position. Field1 = "RFTGQOASZ"

| makeresults
| eval field1 = "RFTGQOASZ"
| eval pos = len(mvindex(split(field1,"R"),0))+1
| eval value = 5
| eval pos1 = if(pos<3,value,0)

The field1 value will change every time, and I would like to assign a value based on the position. So, say the "R" character is in the middle: auto-decrement the value, something like i--.
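A minimal sketch of one possible approach, reusing the pos calculation from the question: start from a base value and decrement it by one for each position the "R" sits further into the string. The base value of 5 and the linear decrement are assumptions, not something the question specifies:

```spl
| makeresults
| eval field1="RFTGQOASZ"
| eval pos=len(mvindex(split(field1,"R"),0))+1
| eval start=5
| eval value=start-(pos-1)
```

With "R" at position 1 this yields 5, at position 2 it yields 4, and so on; any other mapping from pos to value can be expressed the same way with a single eval.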
Hello, everyone! I want to configure ingesting data in JSON format through Splunk DB Connect. The database is MySQL. Is this possible?
Hello, I'm experiencing some issues on my search heads. I'm getting this error: The searchhead is unable to update the peer information. Error = 'Unable to reach the cluster manager' for manager=https://hostname-cm1:8089. Could anyone recommend a fix or workaround?
Hi all, we are using Splunk Cloud, and I am sending a log file for ingestion to https://http-inputs-mydomain.com/services/collector/raw. The problem is that each line in this log file can be quite big, 25,000 characters or more, and Splunk Cloud is truncating at 10,000 characters. I can find steps for handling this on Splunk on-prem for heavy forwarders etc., but it doesn't seem to be addressed for the HTTP inputs on Cloud. Any ideas on how I can change it to accept larger logs? Thanks, Chris
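The 10,000-character cutoff matches the default of the TRUNCATE setting in props.conf, which applies per sourcetype. A fragment like the following (the sourcetype name is a placeholder) raises the limit; in Splunk Cloud this would typically be applied through Settings > Source types (Advanced) or via a support/ACS change rather than editing files directly:

```ini
[my_custom_sourcetype]
TRUNCATE = 50000
```

Setting TRUNCATE = 0 disables truncation entirely, though a sane upper bound is usually safer for malformed input.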
Hi, I am new to Splunk and struggling to create line graphs. I have a query which displays a count for the month:

index="app" earliest=1640995200 latest=1643673600
| stats count AS January
| appendcols
    [search index="app" earliest=1643673600 latest=1646092800
    | stats count AS February]

This is correctly presenting the information as I would expect it. However, I am almost certain my search is not right, because when I attempt to plot this on a line graph the axes are not correct. I would like the total count on the Y axis and the months along the X axis. Would appreciate any guidance. Thank you!
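A single timechart usually replaces the appendcols pattern and produces a time-based X axis that the line chart can plot directly. A sketch assuming the same index and a time range covering both months:

```spl
index="app" earliest=1640995200 latest=1646092800
| timechart span=1mon count AS Total
```

Each month becomes one point on the X axis and Total goes on the Y axis, with no sub-searches needed.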
Ehh, I have an annoying case. I'm monitoring a file over a Windows share (and, to make things even harder to troubleshoot, I don't have direct access to the share from my administrative user; only the domain user the UF runs as has access). The file is a CSV; it's getting properly split into fields and the date is parsed OK. I have transforms for removing the header (and a footer; this file has a footer as well).

This works mostly well. Mostly, because every time data is added to the file, the file is apparently recreated from scratch: new data is inserted before the footer, and I'm getting entries like

02-15-2022 10:55:23.008 +0100 INFO WatchedFile - File too small to check seekcrc, probably truncated. Will re-read entire file='\\path\to\the\file'

Luckily, for now the file is relatively small (some 3k lines) and doesn't eat up much license compared to this customer's other sources, but it's annoying that the same events get ingested several times a day. The problem is that I don't see any reasonable way to avoid it. There is no deduplication functionality on input, and I don't have any "buffer" I could compare against using ingest-time eval or something like that. Any aces up your sleeves?
I am using the Nginx app to ship Nginx logs to Splunk. Everything works well, but intermittently I see a single event consisting of multiple Nginx access log lines. The Nginx app itself has EVENT_BREAKER_ENABLE set and an EVENT_BREAKER regex, but this doesn't work 10-20% of the time. Can someone please help, or am I missing something? My inputs.conf:

[monitor:///var/log/nginx-access.log]
index = artifactory
disabled = false
source = nginx-access
sourcetype = nginx:plus:kv

[monitor:///var/log/nginx-error.log]
disabled = false
sourcetype = nginx:plus:error
index = artifactory
source = nginx-error

The Nginx app has already created props.conf on the search head cluster.
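One thing worth checking: EVENT_BREAKER settings only take effect in a props.conf deployed to the universal forwarder itself, and line breaking is applied on the indexers/heavy forwarders; a props.conf that exists only on the search head cluster does not influence ingestion. A minimal props.conf sketch for the forwarder, assuming each event is a single log line (the stanza name is taken from the inputs.conf above):

```ini
[nginx:plus:kv]
EVENT_BREAKER_ENABLE = true
EVENT_BREAKER = ([\r\n]+)
```

On the indexers, the equivalent would be SHOULD_LINEMERGE = false with LINE_BREAKER = ([\r\n]+) for the same sourcetype.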
Hi All, I have one dashboard with multiple panels and it is taking too much time to load, so I am trying to implement a base search with post-process searches, and I have one doubt about implementing it. My queries are identical up to the index, sourcetype and source, but they differ in one field (assume it is a transaction code), and the remaining part of each query is the same. For example:

index=xyz sourcetype="dtc:hsj" tcode="1324" (for a few queries)
index=xyz sourcetype="dtc:hsj" tcode="1324" OR tcode="234" (for a few queries)

Is there a way I can configure tcode in the base search and use it in the post-process searches? Thanks in advance.
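One common pattern, sketched in Simple XML, is to put the broadest version of the filter in the base search and narrow it per panel in the post-process searches (the IDs and queries here are illustrative, built from the examples in the question):

```xml
<search id="base">
  <query>index=xyz sourcetype="dtc:hsj" (tcode="1324" OR tcode="234")</query>
</search>

<search base="base">
  <query>search tcode="1324" | stats count</query>
</search>

<search base="base">
  <query>search tcode="1324" OR tcode="234" | stats count</query>
</search>
```

Caveat: a base search that returns raw events is subject to result limits, so it is usually best to keep it as selective as possible or make it a transforming search that still carries every field the post-processes need.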
Is there a way to execute the following OS processes?

- Cluster master server (Splunk Enterprise installed)
/usr/bin/eu-stack
/usr/bin/iostat
/usr/bin/netstat
/usr/bin/ps
/usr/bin/strace
/usr/sbin/lsof
/usr/sbin/tcpdump

- Search head server (Splunk Enterprise and Splunk Enterprise Security installed)
/usr/bin/eu-stack
/usr/bin/iostat
/usr/bin/netstat
/usr/bin/ps
/usr/bin/strace
/usr/bin/uname
/usr/sbin/lsof
/usr/sbin/tcpdump

- Deployment server (Splunk Enterprise installed)
/usr/bin/eu-stack
/usr/bin/iostat
/usr/bin/netstat
/usr/bin/ps
/usr/bin/strace
/usr/sbin/lsof
/usr/sbin/tcpdump
I have a requirement; can anyone please help? I want to append an inputlookup table to my main table, which has the same column names and field names. Here are my main search results. Here are my inputlookup results. Desired output:
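Assuming the lookup's column names already match those of the main search, an append is usually enough. A sketch (the lookup name and field names are placeholders):

```spl
<your main search>
| table fieldA fieldB
| append
    [| inputlookup my_lookup.csv
    | table fieldA fieldB]
```

Because the field names match, the appended rows line up under the same columns; if the lookup's columns were named differently, a rename inside the subsearch would align them first.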
Hi, I would like to know if it is possible to automatically display a radar chart from a lookup. radar.csv is the result of a scheduled search. There are 3 fields in this CSV: "sig_app", which corresponds to the radar "key" field; "sig_cat", which corresponds to the radar "axis" field; and "count", which corresponds to the radar "value" field. Is it possible to do this or not? Thanks.

| inputlookup radar.csv
| eval sig_app=key
| eval sig_cat=axis
| eval count=value
| eval key="Actions", AAA=.37, BBB=8.64, CCC=2.56, DDD=1.68, EEE=4.992
| untable key,"axis","value"
| eval keyColor="magenta"
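If the radar visualization expects fields named key, axis and value, the mapping usually needs to go the other way around: rename the lookup's fields to the names the visualization expects, rather than evaluating the lookup fields from non-existent ones. A sketch using the field names from the question:

```spl
| inputlookup radar.csv
| rename sig_app AS key, sig_cat AS axis, count AS value
| table key axis value
```

The eval sig_app=key in the original runs in the opposite direction (it reads a field called key, which does not yet exist at that point in the pipeline).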
Hi, I have a column chart that shows one field but filters by others. I want the column colors to differ for each value of a specific selection field in the input, but I cannot assign each value its own color up front because I have something like 950 values (maybe more). So if the user chooses to look at 3 values in the filter, each value should appear in a different color so the user knows which color belongs to which value. I did not find anything like this; I only saw that colors have to be set per value from the beginning. Can you help me?
Here is the original log file:

Host availabilty Hashmap is {HKL20167984SIT_13_8225=true, HKL20167984SIT_7_82FB=true, HKL20167984SIT_2_82F6=true, HKL20167984SIT_16_8228=true, HKL20167984SIT_1_82F5=true, HKL20167984SIT_11_8223=true, HKL20167984SIT_14_8226=true, HKL20167984SIT_4_82F8=true, HKL20167984SIT_12_8224=false, HKL20167984SIT_3_82F7=true, HKL20167984SIT_15_8227=true, HKL20167984SIT_8_8220=true, HKL20167984SIT_9_8221=true, HKL20167984SIT_6_82FA=true, HKL20167984SIT_5_82F9=true, HKL20167984SIT_10_8222=true}

Here's my search command:

index="hkcivr" source="/appvol/wlp/DIVR01HK-AS01/applogs/wrapup.log*"
| rex max_match=0 "_(?<port_status>\d{4}=\w+)"

I hope to get a result like the one below:

Time 2022-02-15 07:02
8225=false, 8228=false, 8223=false, 8226=false, 8224=false, 8220=false, 8227=false, 8221=false, 8222=false
8225=false, 8228=false, 8223=false, 8226=false, 8224=false, 8220=false, 8227=false, 8221=false, 8222=false
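A sketch with a valid capture-group name (regex named groups cannot contain a space, so "port status" has to become port_status, and the = needs no escaping) that joins the extracted pairs into one line per event:

```spl
index="hkcivr" source="/appvol/wlp/DIVR01HK-AS01/applogs/wrapup.log*"
| rex max_match=0 "_(?<port_status>\d{4}=\w+)"
| eval port_status=mvjoin(port_status, ", ")
| table _time port_status
```

max_match=0 keeps all matches per event as a multivalue field, which mvjoin then flattens into the comma-separated list shown in the desired output.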
Hi. I want to merge data from multiple fields into a single field. If you have a table like the following:

fieldA, fieldB, fieldC
valueA, valueB, valueC

the expected output is as follows; I want to combine them into a single field in Field = Value format:

merge_data = "fieldA = valueA, fieldB = valueB, fieldC = valueC"

I think it can be done using multivalue functions or foreach, but I don't know how to code it. Thanks in advance!
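foreach can build the Field = Value string incrementally. A sketch assuming the three fields from the example (makeresults just fabricates a sample row):

```spl
| makeresults
| eval fieldA="valueA", fieldB="valueB", fieldC="valueC"
| foreach fieldA fieldB fieldC
    [ eval merge_data=if(isnull(merge_data),
        "<<FIELD>> = ".'<<FIELD>>',
        merge_data.", <<FIELD>> = ".'<<FIELD>>') ]
| table merge_data
```

Inside the foreach body, <<FIELD>> expands to the field name as a string and '<<FIELD>>' to its value; replacing the explicit field list with a wildcard like foreach field* would cover a variable set of fields.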
Hi All, I registered for the Splunk Phantom Community Edition a few days back; however, I have yet to receive a confirmation email from the team. Could someone help me out here?
Hi, I am trying to configure the Add-on for Microsoft Defender (https://splunkbase.splunk.com/app/4959/). Can anyone confirm what settings are needed for Login URL, Endpoint and Resource? Whichever I use, I'm getting 401 errors. I have followed https://docs.microsoft.com/en-us/microsoft-365/security/defender-endpoint/api-hello-world?view=o365-worldwide and confirmed the permissions on the app registration are 100% correct. Cheers
Hi, I have percentages calculated for compliance and non-compliance based on the data, and I need to segregate them by color in a stacked bar chart: all Non Compliance % values between 90 and 95 should be shown in yellow, and the rest (95 to 100) in red. Please let me know the search for this. My search:

| savedsearch _Saved_Search
| eval total_count=Compliant+NonCompliant
| eval "Compliance %"=round(100*'Compliant'/total_count,2)
| eval "Non Compliance %"=round(100*'NonCompliant'/total_count,2)
| stats count by msc "Compliance %" "Non Compliance %"
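One way to get range-based colors in a chart is to split the measure into one series per band and color each series. A sketch appended after the percentage calculation (the band field names are invented for illustration; the thresholds come from the question):

```spl
| eval "Non Comp 90-95"=if('Non Compliance %'>=90 AND 'Non Compliance %'<95, 'Non Compliance %', null())
| eval "Non Comp 95-100"=if('Non Compliance %'>=95, 'Non Compliance %', null())
```

In Simple XML the two series could then be colored with an option such as <option name="charting.fieldColors">{"Non Comp 90-95": 0xFFFF00, "Non Comp 95-100": 0xFF0000}</option>.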
Hi all, I have a query which gives this kind of table:

Name    Date        Status    Task    SubGroup
A       14-02-22    PASS      a       a1
                              b       b1
                                              b2

The data comes together in one row, but I want separate rows for all the values. Also, some tasks have a SubGroup, and with this result one cannot differentiate between them. I have tried using mvzip like this:

...............| eval tmp=mvzip(mvzip(Name,Task,","),SubGroup,",")
| mvexpand tmp
| table Name Date Status tmp
| eval Name=mvindex(split(tmp,","),0)
| eval Task=mvindex(split(tmp,","),1)
| eval SubGroup=mvindex(split(tmp,","),2)
| table Name Date Status Task SubGroup

I keep getting an error in the eval command ("Expected )"). I don't know whether it is a small mistake; I have tried a lot but am not able to solve this.
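A sketch of the same idea with two assumptions worth checking: the "Expected )" error often comes from curly quotes pasted from a document rather than plain ASCII quotes, and mvzip needs its two inputs to be multivalue fields of equal length, so a single-valued Name does not need zipping while an occasionally missing SubGroup needs padding to stay aligned with Task:

```spl
| eval SubGroup=coalesce(SubGroup, "-")
| eval tmp=mvzip(Task, SubGroup, ",")
| mvexpand tmp
| eval Task=mvindex(split(tmp, ","), 0)
| eval SubGroup=mvindex(split(tmp, ","), 1)
| table Name Date Status Task SubGroup
```

Because Name, Date and Status are single-valued, mvexpand carries them unchanged onto every expanded row.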
I think savedsearches.conf contains information about both alerts and reports, but when I run the following btool command and check the result, I can't tell which stanza is a report and which is an alert:

splunk btool savedsearches list

Question 1: From the btool results, which parameters can I look at to determine that a stanza is a report?
Question 2: From the btool results, which parameters can I look at to determine that a stanza is an alert?

@somesoni2
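There is no single "type" parameter in savedsearches.conf; a common heuristic is that alerts carry alerting parameters (alert_type other than "always", alert.track, counttype/relation/quantity, action.* settings) while reports do not. The same heuristic can be applied over REST instead of btool; a sketch (the classification rule is an assumption, not an official definition, and the returned parameters can vary by Splunk version):

```spl
| rest /servicesNS/-/-/saved/searches
| eval type=if(alert_type!="always" OR 'alert.track'=1, "alert", "report")
| table title type alert_type alert.track is_scheduled
```

In the btool output, the same fields in each stanza are the ones to look for.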
When Settings > "Searches, Reports, and Alerts" is displayed in Splunk Web, ○○○ is shown by default. I want to change the defaults of this view as below:

Type: Alert, App: Search & Reporting, Owner: Administrator, Number of items displayed: 100 per page

If possible, please tell me the procedure. Thanks. @isoutamo @PickleRick