All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


  I recently had to take an indexer offline while I worked on storage, so I put it into quarantine until things were resolved. Now that things are resolved, the indexer is receiving data, but I still see the error below in the monitoring console: "One or more peers has been excluded from the search because they have been quarantined. Use "splunk_server=*" to search these peers. This might affect search performance." My cluster manager now sees both indexers (01 and 02) in this group, but there are still errors suggesting the 02 indexer is still quarantined. Indexer02 was the one quarantined; it is now receiving data and shows up in the monitoring console, but with the above error. Any advice on how to unquarantine this indexer or resolve this message? I've tried to fiddle around with this doc but I can't seem to find the correct syntax for the indexer: https://docs.splunk.com/Documentation/Splunk/8.0.6/DistSearch/Quarantineasearchpeer   Thanks, Sean
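As a quick sanity check, the error message's own hint can be turned into a search that forces quarantined peers to participate (a sketch; the index and time range are placeholders), to confirm whether indexer02 returns events at all:

```
index=_internal splunk_server=* earliest=-15m
| stats count by splunk_server
```

If indexer02 appears in the results here but not in a normal search, the quarantine is still in effect on the search head side.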
Our department has created a Splunk integration that performs API lookups against IPQualityScore.  One of our searches was augmented with the returned data by adding extra fields for context (fraud_score, recent_abuse, vpn, tor, etc.).  Unfortunately, the integration doesn't keep the results, so I used outputlookup to store them.  While we have a reasonable number of API queries available, I was trying to figure out a way to check if an IP was previously looked up and, if so, append the stored results and skip the duplicate lookup; otherwise, perform the lookup and append the results. This is the query that was used:

index=*_auth sourcetype="azure:aad:signin" userAgent=CBAInPROD riskEventTypes_v2{}=* 8.8.8.8
| rename authenticationDetails{}.succeeded as loginStatus, authenticationDetails{}.authenticationMethod as mfaAuthMethod, authenticationDetails{}.authenticationMethodDetail as mfaAuthDetail, status.additionalDetails as mfaResult, status.failureReason as failureReason, location.city as city, location.state as state, location.countryOrRegion as country, ipAddress as SourceIP, location as Location, userAgent as UserAgent, appDisplayName as Application, riskState as RiskState, riskEventTypes_v2{} as RiskEventType, riskLevelAggregated as RiskLevel, riskLevelDuringSignIn as SignInRisk, conditionalAccessStatus as Status
| eval mfaAuthDetail=if(mfaAuthDetail="","-",mfaAuthDetail), city=if(city=="","N/A",city), state=if(state=="","N/A",state), country=if(country=="","N/A",country), Location=city.", ".state.", ".country, User=if(isnull(userDisplayName),userPrincipalName,userDisplayName)
| fillnull value="-"
| stats count by User,SourceIP,Location,Application,UserAgent,RiskEventType,RiskState,RiskLevel,SignInRisk,Status
| fields - count
| lookup ipqs clientip as SourceIP
| outputlookup append=true override_if_empty=false ipqs.csv
| rename bot_status as Bot, city as City, country_code as Country, fraud_score as Fraud_Score, latitude as LAT, longitude as LONG, mobile as Mobile, proxy as Proxy, recent_abuse as Abuse, success as Success, tor as TOR, vpn as VPN
| table User,SourceIP,Location,Application,UserAgent,RiskEventType,RiskState,RiskLevel,SignInRisk,Status,Bot,City,Country,Fraud_Score,LAT,LONG,Mobile,Proxy,Abuse,Success,TOR,VPN
| sort 0 User
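One sketch of the "only call the API for unseen IPs" idea (assuming the cached results live in the ipqs.csv lookup with a clientip column; cached_score is an invented marker field, not part of the original search): do a cheap file lookup first, keep only rows with no cached result, and let only those go through the API-backed lookup and get appended:

```
...
| lookup ipqs.csv clientip AS SourceIP OUTPUT fraud_score AS cached_score
| where isnull(cached_score)
| lookup ipqs clientip AS SourceIP
| outputlookup append=true override_if_empty=false ipqs.csv
```

Rows that already had a cached_score never reach the API lookup, so the lookup file accumulates each IP only once.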
I'm using the Azure Add-on for Splunk to pull in our Azure AD sign-in, audit, and user data. All is working well for the most part, with the exception that some user events (sourcetype="azure:aad:user") seem to have truncated JSON and therefore don't parse correctly. Is there a limit setting that can remediate this?
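If the JSON really is being cut at ingest time, the usual suspect is the TRUNCATE limit (default 10000 bytes) in props.conf on the indexer or heavy forwarder. A sketch, assuming the sourcetype above:

```
[azure:aad:user]
TRUNCATE = 0
```

0 disables truncation entirely; a generous explicit byte limit covering your largest expected event is often the safer choice.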
Hi, Is there a REST endpoint to take a peer offline temporarily? I see one for decommissioning:

curl -k -u admin:pass https://indexer:8089/services/cluster/slave/control/control/decommission

But it seems like this is for taking a peer offline permanently. We are looking to take a peer offline temporarily.
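For a temporary removal, the route I know of is the peer's own CLI rather than a search-head REST call (a sketch; run on the indexer itself):

```
# temporary: the peer rejoins the cluster when splunkd is started again
splunk offline

# permanent decommission (enforces replication counts before going down)
splunk offline --enforce-counts
```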
I have a log in Splunk that carries unique ids delimited with pipe symbols, e.g.:

19:46:47.146 - [http-nio-8000-exec-9] INFO edu.test.controller |{My Var1}|{My Var2}|{myVar3}| - {log message}

I need to perform a query based on {My Var1}, and I also need to list (dedup) all logs based on {My Var1}.
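A sketch of pulling the pipe-delimited values out with rex (the field names MyVar1/MyVar2/MyVar3 are invented for illustration, as is the index):

```
index=your_index
| rex field=_raw "\|(?<MyVar1>[^|]*)\|(?<MyVar2>[^|]*)\|(?<MyVar3>[^|]*)\|"
| search MyVar1="some value"
| dedup MyVar1
```

Each `[^|]*` captures everything up to the next pipe, so the extraction tolerates spaces inside the values.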
Hello everyone, I am trying to create a token-based caption on a single value. The value of the caption is a string-based token, and I want to add a line break in the caption or somehow wrap the line. I have tried adding /r/n and multivalue pairs, but nothing seems to work. Any kind of help is appreciated, because I am starting to think it might not actually be possible. Thank you.
Hi Everyone, I have one requirement. I have two types of logs.

The 1st log contains the field Identifier (7bb86db9-8268-19a1-0000-0000648141a2):

2020-10-12 12:44:51,553 ERROR [Timer-Driven Process Thread-2] o.a.StandardProcessGroup Failed to synchronize StandardProcessGroup[identifier=7bb86db9-8268-19a1-0000-0000648141a2] with Flow Registry because could not 24579813-d2a3-4d67-892f-11b5838011af in bucket 9d407076-db0a-4587-b8e7-51be45d8c193
host = lpdosputb50088.phx.aexp.com
identifier = 7bb86db9-8268-19a1-0000-0000648141a2
source = /var/log/nifi/nifi-app.log

The 2nd log contains the field id:

id = 21aa3004-1d9d-1679-9d43-b623891ef191
ADS_Id = initRequest_Type = atRequest_URL = org.apache..AbstractProcessor.onTrigger(AbstractProcessor.java:27)
id = 21aa3004-1d9d-1679-9d43-b623891ef191

Both id and Identifier are extracted. I concatenated Identifier and id into a new field, id2. Below is my search query:

index=abc sourcetype=xyz error
| rex field=_raw "ERROR(?&lt;Error_Message&gt;.*)"
| rex field=Error_Message "Failed(?&lt;Message&gt;.*)"
| eval Message="Failed".Message
| strcat id identifier id2
| eval ClickHere="https://abc.fgh.com/hjk/?processGroupId=".identifier
| table _raw identifier id2 ClickHere
| join type=outer id2 [dbxquery query="SELECT id id2, parent_chain, url FROM parent_chains;" connection="SQL"]

and the surrounding dashboard XML:

</query>
<earliest>-7d@h</earliest>
<latest>now</latest>
<sampleRatio>1</sampleRatio>
</search>
<fields>"_raw", "identifier","parent_chain","url"</fields>
<option name="count">100</option>
<option name="dataOverlayMode">none</option>
<option name="drilldown">cell</option>
<option name="percentagesRow">false</option>
<option name="refresh.display">progressbar</option>
<option name="rowNumbers">false</option>
<option name="totalsRow">false</option>
<option name="wrap">true</option>
<drilldown>
<condition field="identifier">
<link target="_blank">$row.ClickHere|n$</link>
</condition>
</drilldown>

I have created one hyperlink, ClickHere, and set the drill-down condition with Identifier. Clicking on Identifier takes me to the correct hyperlink. But I don't want to display the Identifier column; I want to display the id2 column, and clicking on id2 should take me to the same hyperlink that Identifier does. For id2 it's not working. Can someone guide me on where I have gone wrong? Thanks in advance.
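If the aim is to fire the same link from the id2 column, one sketch (assuming Simple XML and the field names above) is to key a drilldown condition on id2 as well, since a condition bound to field="identifier" only fires for clicks on that column:

```
<drilldown>
  <condition field="id2">
    <link target="_blank">$row.ClickHere|n$</link>
  </condition>
</drilldown>
```

With that in place the identifier column can be dropped from the table command while $row.ClickHere$ still resolves, provided ClickHere remains in the result set.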
Hi All, Long time lurker, first time poster. I'm the admin of our Splunk instance and I can't see an alert my colleague (of a lesser permission'ed role) created. AFAIK, the alert is set to private for himself - but as an admin why am I not seeing it? I can see some other peoples'. He also tried to change this alert's Sharing to be App, but is greyed out. What's needed to fix that?
I have two servers (all-in-one), one's production the other development. Sometimes, I'd like to have a forwarder send data to both. The app from production sends the usual data to just the production server. Is there a way to limit the app's scope when an app is deployed from development? Right now, it's sending data from the development app to the production server.
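One approach (a sketch; the group names, server names, and monitored path are placeholders) is to define both destinations as separate tcpout groups in outputs.conf and pin the development app's inputs to one group with _TCP_ROUTING:

```
# outputs.conf on the forwarder
[tcpout:prod_indexers]
server = prodserver:9997

[tcpout:dev_indexers]
server = devserver:9997

# inputs.conf inside the development app
[monitor:///var/log/myapp]
_TCP_ROUTING = dev_indexers
```

Inputs without _TCP_ROUTING go to the default group, so the production app's data keeps flowing to production untouched.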
We just updated our application from Angular 9 to Angular 10 last week and noticed that in our End User Monitoring data, we've seen a significant increase in all of our load times for our virtual pages (like a 5x increase on average from 3 seconds to 15+ seconds). It does not appear from an end user perspective that the page is loading any slower than it did previously, and when I look at examples of slow pages in EUM, I see large gaps in the resource waterfall where it appears nothing was going on.  Going thru the EUM documentation, I saw that Angular 9 support was introduced back in June. I'm wondering if anyone else has seen issues after upgrading to Angular 10, and if there is any ETA on official support for Angular 10? Thanks.
Hi Splunk Community, I know this question has been asked several times over, but I haven't found a satisfactory solution. We have been installing Splunk Enterprise on various virtual servers, one each for a Search Head, Indexer, and HF. So far we have installed more than 5 Splunk Enterprise instances, each on a Linux (RHEL) VM, following the standard installation procedure and keeping the splunk.secret file the same throughout. Every server is functioning normally, except for one server where we cannot access the Splunk web interface via localhost. Note: none of the configurations have been changed. The web.conf file has

startwebserver = 1
httpport = 8000

netstat -an | grep 8000 shows that it is listening on this port:

tcp 0 0 0.0.0.0:8000 0.0.0.0:* LISTEN

I have checked whether it is a firewall issue by performing telnet 127.0.0.1 8000 as well as telnet 0.0.0.0 8000, and both get connected:

Trying 0.0.0.0...
Connected to 0.0.0.0.
Escape character is '^]'.

So I believe there is no firewall issue either. The only valuable info I receive after using the escape character on the telnet connection is:

HTTP/1.1 400 Bad Request
Date: Mon, 12 Oct 2020 16:44:47 GMT
Content-Type: text/html; charset=UTF-8
X-Content-Type-Options: nosniff
Content-Length: 207
Connection: Close
X-Frame-Options: SAMEORIGIN
Server: Splunkd

<!doctype html><html><head><meta http-equiv="content-type" content="text/html; charset=UTF-8"><title>400 Bad Request</title></head><body><h1>Bad Request</h1><p>HTTP Request was malformed.</p></body></html>
Connection closed by foreign host.

I can't seem to find anything in splunkd.log either. I'm running out of options for debugging why the web server is not loading. Please assist.

Thanks,
Good Morning, I am currently trying to extract a field from  a variable. The variable name is command, and the value the command holds is  Command = "CONNECT SPLNKUSER GROUP(QA)" What I'm trying to do is extract the QA part and create a new variable called group. For example: Group = QA Group= Accounting Group = Payroll   Thank you, Marco
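A sketch of the extraction with rex (assuming the field is literally named Command and the group always appears in the form GROUP(...)):

```
... | rex field=Command "GROUP\((?<Group>[^)]+)\)"
| table Command Group
```

`[^)]+` captures everything between the parentheses, so multi-word values like "Accounting Dept" would also be caught.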
Hi, I am building a dashboard where I have a multi-select input called locations, which is populated with a query via the dynamic options. I also include a static option called "ANY" with a value of *. I have a token prefix and suffix of double quotes (") and a delimiter of a comma (,). My goal is to use that token later, in the query of one panel, in a where ... in clause:

where location in ($locations$)

When I don't select the ANY value, the query works as expected:

where location in ("XXX", "YYY", "ZZZ")

But when I include ANY in the multi-select input, the query does not seem to work:

where location in ("XXX", "YYY", "ZZZ", "*")

Is it possible to do what I intend? Is there an easier way to achieve the same? Many thanks in advance.
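One thing worth knowing here (a sketch, using the token as described): the where command compares literally, so "*" inside in(...) is not treated as a wildcard, while the search command's IN operator does expand wildcards. Moving the filter into a search clause may therefore behave the way the ANY option intends:

```
... | search location IN ($locations$)
```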
Getting "The Page you are trying to view does not exist. Click here to go back to the Home Page." when trying to access the trial.
Hello, I have the following entry in my transforms.conf:

[dtimes]
REGEX = ^.+s4hana\.ondemand\.com (?P<DBSID>.{3}).+t0\(timeofday\):(?P<t0>.*?);dt1\(us\):(?P<dt1>.*?);dt2\(us\):(?P<dt2>.*?);dt3\(us\):(?P<dt3>.*?);dt4\(us\):(?P<dt4>.*?);total\(us\):(?P<total>.*?)$
SOURCE_KEY = _raw
FORMAT = DBSID::$1 t0::$2 dt1::$3 dt2::$4 dt3::$5 dt4::$6 total::$7
WRITE_META = true

which I would expect to extract the corresponding fields from events like the one below:

[12/Oct/2020:03:56:39 +0000] 10.1.6.58 100/CB9980000122 100/CB9980000122 042457C44BD441A36E673571F0C7D1AF - "GET /sap/bc/ui5_ui5/sap/fin_lib/~D0C2FE335CFD0450BE39DFA0391E81C6~5/error/Error.js HTTP/2" 200 1081 - 2ms my303891.s4hana.ondemand.com NII vhsfhniici_NII_00 "-"TLSv1.2 t0(timeofday):1602474999.837288;dt1(us):501;dt2(us):32;dt3(us):1257;dt4(us):34;total(us):1824

As per regex101 it works fine, and an SPL search with the above regex in rex field=_raw works fine as well. Unfortunately, when placed in transforms.conf it does not. There are also the matching entries in props.conf:

[webdispatcher]
TRANSFORMS-ExtractKeyFields = dtimes
TRANSFORMS-ExtractKeyFields = passportID

and fields.conf:

[SYSTEMDB]
INDEXED = True
INDEXED_VALUE = False

[vhost]
INDEXED = True
INDEXED_VALUE = False

[DBSID]
INDEXED = True
INDEXED_VALUE = False

# ############### Extract the performance KPIs from the Webdispatcher trace
[passportID]
INDEXED = True
INDEXED_VALUE = False

[request]
INDEXED = True
INDEXED_VALUE = False

[status]
INDEXED = True
INDEXED_VALUE = False

[t0]
INDEXED = True
INDEXED_VALUE = False

[dt1]
INDEXED = True
INDEXED_VALUE = False

[dt2]
INDEXED = True
INDEXED_VALUE = False

[dt3]
INDEXED = True
INDEXED_VALUE = False

[dt4]
INDEXED = True
INDEXED_VALUE = False

[total]
INDEXED = True
INDEXED_VALUE = False
#******************************

Can anyone help? The second regex there (passportID), which is slightly easier, works fine ...

Kind Regards, Kamil
Hello, I would need to run a Python script, using Splunk's universal forwarder, on the servers where the forwarder is installed. The goal of the script is to check the hostname in the input.conf file (in $SPLUNK_HOME/etc/system/local): if the server hostname and the hostname in input.conf differ, the script changes the hostname in input.conf and restarts Splunk; if they are the same, it does nothing and exits. The script must run every night at 2:00 AM, and I would like the app, once created, to be distributed to all servers.

import sys
import fileinput
import subprocess
import re

INPUTS_PATH = r"C:\Program Files\SplunkUniversalForwarder\etc\system\local\input.conf"
SPLUNK_EXE = r"C:\Program Files\SplunkUniversalForwarder\bin\splunk.exe"

print("[+] Splunk Hostname Changer")

def get_hostname_to_command():
    # Ask the OS for the real hostname
    hostC = subprocess.getoutput("hostname")
    print("[+] Result of hostname command: " + hostC)
    return hostC

def get_hostname_to_file(path):
    # Read the "host = ..." line (second line) from input.conf
    with open(path, "r") as f:
        hostRow = f.readlines()[1]
    hostF = re.search(r"host\s*=\s*([\w\-]+)", hostRow).group(1)
    print("[+] Hostname in input.conf: " + hostF)
    return hostF

def hostname_check(hostC, hostF, path):
    if hostC != hostF:
        print("[-] Hostname mismatch. Changing the hostname in input.conf!")
        # Rewrite the file in place, replacing the stale hostname
        for line in fileinput.input(path, inplace=True):
            sys.stdout.write(line.replace(hostF, hostC))
        print("[+] Hostname changed. Operation complete!")
        print("[+] Restarting Splunk...")
        subprocess.call([SPLUNK_EXE, "restart"])
        print("[+] Splunk restart completed!")
    else:
        print("[+] Hostnames match. Exiting.")

hostCommand = get_hostname_to_command()
hostFile = get_hostname_to_file(INPUTS_PATH)
hostname_check(hostCommand, hostFile, INPUTS_PATH)

I have read some scattered information around the web, and I still don't understand whether this kind of setup is possible. Is it possible to do this? I'm here for further information.

Best regards, Antonio
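For the 2:00 AM schedule and the distribution, one sketch (the app and file names are placeholders) is to package the script as a scripted input in an app pushed from the deployment server; scripted-input intervals accept cron expressions:

```
# etc/apps/hostname_fixer/default/inputs.conf
[script://.\bin\hostname_check.py]
interval = 0 2 * * *
disabled = 0
```

The forwarder then runs the script itself every night at 02:00, with no external scheduler needed on each host.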
I see the EAI:ACL documentation mentions that an owner flag can be set. Can this be used for getting saved searches based on the owner? So far with curl I have not gotten the results I expected (the owner is ignored). If someone knows, can you please give me an example? Here is my curl:

curl -k -u john:Password http://localhost:8089/servicesNS/-/app/saved/searches&eai:acl.owner=john
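A couple of hedged variants worth trying (exact filter support varies by version; note the management port speaks https, and query parameters must start with ? and be quoted so the shell doesn't eat the &):

```
# owner segment of the servicesNS path
curl -k -u john:Password "https://localhost:8089/servicesNS/john/-/saved/searches"

# generic search parameter, URL-encoded (eai:acl.owner=john)
curl -k -u john:Password "https://localhost:8089/servicesNS/-/-/saved/searches?search=eai%3Aacl.owner%3Djohn"
```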
I just want to know how the Splunk license server identifies a new Splunk indexer added to our environment. Is any config change needed on the newly added indexer node?
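For context, a new indexer is not discovered automatically; it is pointed at the license manager explicitly (a sketch; the hostname is a placeholder), either via CLI or server.conf:

```
# CLI on the new indexer
splunk edit licenser-localslave -master_uri https://licensemaster:8089

# equivalent server.conf stanza
[license]
master_uri = https://licensemaster:8089
```

After a restart the indexer appears as a license slave on the license manager and its ingest counts against the pool.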
Hello Splunk Team, I have been exploring how to connect Splunk with Hadoop to export a large volume of (historical) data. Could you please help and point us to the best way to export data from Splunk to Hadoop (HDFS)? We learned while exploring that Hadoop Connect would be one way, but it is now a legacy product and we cannot implement it in production. We also explored Hadoop Data Roll, but it can only export in a particular data format. We want to know the best method available for exporting large volumes from Splunk to HDFS.
hi,  I have a search like this:

|rest /services/data/indexes splunk_server=local count=0
| search disabled=0 title!=_blocksignature title!=_thefishbucket
| rename title AS index
| fields index
| lookup indexes.csv index OUTPUT account
| search index=*xxx*

The result is a table like this:

index    account
xxx-aaa
xxx-bbb  D
ccc-xxx

I want to fill the empty account cells with the "D" account, but only for indexes containing the string "xxx". I tried an eval:

| eval account=if(index=="*xxx*","D",account)

but it doesn't work. Can you help me? Thanks.
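A sketch of why the eval fails and one fix: the == operator compares literally, so no index name is ever equal to the string "*xxx*"; match() (regex) or like() does the substring test instead. Using the field names above:

```
| eval account=if(match(index,"xxx") AND (isnull(account) OR account==""), "D", account)
```

The isnull/empty check keeps existing account values (like the D on xxx-bbb) untouched and only fills the blank cells.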