All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi, my issue is: I want to link two different CSVs that have one column in common.

My first CSV:

    account  id   year  month  total
    aaa      111  2020  jan    445

My second CSV:

    account_num  platform
    111          zzz

The values of the id column in the first CSV correspond to the values of account_num in the second CSV. What I want is to link both CSVs on id/account_num so that the platform values (2nd CSV) match the right account (1st CSV). Is there a way to do this? Thanks!
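If both CSVs are available as lookup files in Splunk (the file names here are illustrative), one sketch is to read the first file and enrich it with the lookup command:

```spl
| inputlookup accounts.csv
| lookup platforms.csv account_num AS id OUTPUT platform
```

lookup matches the id value of each row against account_num in the second file and appends the corresponding platform, which is exactly the join described above.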
Hello, I want to set up an alert in Splunk Cloud for Windows machines when CPU% is greater than 90. Please help me set this up; my query is not working as expected.

index="index1" host=windows1 source="WMI:ProcessesCPU"
| WHERE NOT Name="_Total"
| WHERE NOT Name="System"
| WHERE NOT Name="Idle"
| streamstats dc(_time) as distinct_times
| head (distinct_times == 1)
| stats latest(PercentProcessorTime) as CPU% by Name
| sort -ProcessorTime
| eval AlertStatus=if('CPU%' > 90, "Alert", "Ignore")
| search AlertStatus="Alert"

inputs.conf configuration:

[WMI:ProcessesCPU]
interval = 60
wql = SELECT Name, PercentProcessorTime, PercentPrivilegedTime, PercentUserTime, ThreadCount FROM Win32_PerfFormattedData_PerfProc_Process WHERE PercentProcessorTime>0
disabled = 0
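One detail that stands out: the stats command only outputs CPU% and Name, so `| sort -ProcessorTime` has no field to act on, and renaming CPU% to a name without the % character avoids having to single-quote it in later clauses. A simplified sketch (untested against your data; field names taken from the WMI input above):

```spl
index="index1" host=windows1 source="WMI:ProcessesCPU"
| where NOT Name IN ("_Total", "System", "Idle")
| stats latest(PercentProcessorTime) AS pct_cpu by Name
| where pct_cpu > 90
| sort - pct_cpu
```

Saved as an alert with a "number of results greater than 0" trigger condition, this fires whenever any process is above 90%.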
Hi, we have the Splunk Add-on for Microsoft Office 365 running on a heavy forwarder. What is the best way to do data validation? How can we see the API calls for the inputs below?

[splunk@ilissplfwd06 local]$ cat inputs.conf
[splunk_ta_o365_management_activity://AuditAD]
content_type = Audit.AzureActiveDirectory
index = o365_management_activity
interval = 300
tenant_name = o365
number_of_threads = 8
sourcetype = o365:management:activity
start_by_shell = false
disabled = 0

[splunk_ta_o365_management_activity://AuditSharePoint]
content_type = Audit.SharePoint
index = o365_management_activity
interval = 300
tenant_name = o365
number_of_threads = 8
sourcetype = o365:management:activity

[splunk_ta_o365_management_activity://AuditGeneral]
content_type = Audit.General
index = o365_management_activity
interval = 300
tenant_name = o365
number_of_threads = 8
sourcetype = o365:management:activity

[splunk_ta_o365_management_activity://AuditExchange]
content_type = Audit.Exchange
index = o365_management_activity
interval = 300
tenant_name = o365
number_of_threads = 8
sourcetype = o365:management:activity

[splunk_ta_o365_service_status://ServiceStatus]
content_type = CurrentStatus
index = o365
interval = 300
tenant_name = o365

[splunk_ta_o365_service_message://ServiceMessage]
index = o365
interval = 300
tenant_name = o365

[splunk_ta_o365_management_activity://DLPAll]
content_type = DLP.All
index = o365_management_activity
interval = 300
tenant_name = o365
number_of_threads = 8
[splunk@ilissplfwd06 local]$
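The add-on logs its own activity (including each modular-input run) to the _internal index on the heavy forwarder, so one validation sketch is to inspect those logs and compare against what lands in the events index (the sourcetype wildcard is an assumption; check what your HF actually emits):

```spl
index=_internal sourcetype=splunk_ta_o365*
| stats count by source log_level
```

Pairing that with `index=o365_management_activity | stats count by sourcetype source` over the same time window gives a rough fetched-versus-indexed comparison per input.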
Hi all, I was wondering if there is any documentation or best practices for moving an indexer cluster to a new subnet. I already checked the conf files for IP addresses instead of DNS names, e.g. server.conf for master_uri. I am just wondering how the cluster will react to the change. I remember that when I add new cluster peers I always enter the DNS name, but I have the feeling that Splunk uses the IP for internal communication inside the cluster. Same with distributed search: when I check it on the SH (index cluster SH), the master is shown as the IP, not the DNS name. Otherwise I would assume it works like an upgrade:

1. Put the master in maintenance mode
2. Take a peer offline and change its subnet
3. Start it up again
4. Move on to the next peer on the same site

Or do I have to change all cluster peers on the same site at once? I would appreciate any help or hints on this.

PS: I assume that all firewall rules have been changed to allow communication between the old and new subnets, since I am not the one doing this.

Thank you, David
Hello, I have a log like the one below, which contains a JSON object:

FEATURES=[
  { "featureName":"TOKEN_VALIDATION", "addedIn":"1.0.7", "description":"This feature is used to Validate the JWT token" },
  { "featureName":"REQUETS_VALIDATION", "addedIn":"1.0.7", "description":"This feature is used to Validate request URL" },
  { "featureName":"REQUEST_PAYLOAD_VALIDATION", "addedIn":"1.0.7", "description":"This feature is used to Validate request body" },
  { "featureName":"RESPONSE_PAYLOAD_VALIDATION", "addedIn":"1.0.7", "description":"This feature is used to Validate response body" },
  { "featureName":"AOP", "addedIn":"1.0.6", "description":"This feature is used to check method execution time" },
  { "featureName":"TIBCO_COMMUNICATOR", "addedIn":"1.0.8", "description":"This feature is used to connect Benefits service " },
  { "featureName":"SECRETS_SECURE", "addedIn":"1.0.7", "description":"This feature is used to Validate" }
]

I want the output table to look like this:

featureName                    addedIn    description
TOKEN_VALIDATION               1.0.7      This feature is used to Validate the JWT token
REQUETS_VALIDATION             1.0.7      This feature is used to Validate request URL
REQUEST_PAYLOAD_VALIDATION     1.0.7      This feature is used to Validate request body
RESPONSE_PAYLOAD_VALIDATION    1.0.7      This feature is used to Validate response body
AOP                            1.0.6      This feature is used to check method execution time
TIBCO_COMMUNICATOR             1.0.8      This feature is used to connect Benefits service
SECRETS_SECURE                 1.0.7      This feature is used to Validate
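Assuming the FEATURES array can be isolated from _raw with rex, a sketch that turns it into one row per feature:

```spl
... | rex field=_raw "FEATURES=(?<features_json>\[.+\])"
| spath input=features_json path={} output=feature
| mvexpand feature
| spath input=feature
| table featureName addedIn description
```

The first spath pulls the array elements into a multivalue field, mvexpand makes each element its own event, and the second spath extracts the three keys from each object.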
I have one column with data entries like below:

column_1
a=123, b= 888, c=645, d=6328
a=6734, f=876, h=6666

I want the result as:

field_1    field_2
a          123
b          888
c          645
d          6328
a          6734
f          876
h          6666

Thanks
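A sketch using rex with max_match to capture every key=value pair, then mvexpand to split the pairs into separate rows (the pattern assumes single-word keys and numeric values, as in the sample):

```spl
... | rex field=column_1 max_match=0 "(?<pair>\w+\s*=\s*\d+)"
| mvexpand pair
| rex field=pair "(?<field_1>\w+)\s*=\s*(?<field_2>\d+)"
| table field_1 field_2
```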
Hi, we have an issue where we sometimes get very large files, or a host produces too much data, and we need to stop it coming in. By the time we notice, too much "bad data" has already been sent. Is it possible to dynamically stop the data via the forwarder, via the indexers, or "somehow" when an alert is thrown? Thanks in advance, Robert
Since the upgrade from v8.0.1 to v8.0.5, Splunk alert emails have not worked. We have alert_actions.conf configured and nothing has changed in it. While troubleshooting, I tested the sendemail command in search and got this error:

command="sendemail", character mapping must return integer, None or unicode while sending mail to: email@domain.com

Is anyone having the same issue?
Hi, we are using Splunk Enterprise version 8.3 in our environment. System logs are forwarded to a local indexer, and via another HF to the indexer. Following the documentation, we modified the index name to our company name, but the name still shows as _index. Where do we modify the index name to our company name? Please guide.
Hi Splunk users, I'm working to implement some specialdays in a StateSpaceForecast model, and I was hoping to add days like Easter and Christmas to the model. I was wondering if there are resources in Splunk that give me the exact date of Easter for the current year, or would I have to import this information from an external source (and put it in a lookup)? Any ideas are welcome. Cheers, Roelof
We have a wonderful set of end users who can enter dates in various formats. A data sample looks like:

reportName="finance" team="financeTeam" reportDate="2020-08-20"
reportName="finance" team="financeTeam" reportDate="2020-08-22"
...

The macro is meant to return the dataset for a specific date, and the expectation was that users would enter

`getmyReport(2020-08-22)`

but some users enter it as `getmyReport(today)`, and my challenge is to make sure such weird inputs are handled before they hit the search engine, directly against the raw data. So is there a way I can pre-process my inputs before passing them to the raw search? The basic trial I've done is:

| makeresults
| eval reportDate=if(reportDate="2020*", reportDate, strftime(now(), "%F"))
| search (earliest=-7d index=xyz reportDate=$reportDate$)

but the above doesn't work. The SECOND option I have, within the macro but outside the base search:

index=xyz
| eval reportDate=if(reportDate="2020*", reportDate, strftime(now(), "%F"))
| search (reportDate=$reportDate$)

This SECOND option works, but I feel it is poor performance-wise (or will Splunk optimise it automatically from 6.5.x onwards?). Is there a better option to pre-process macro variables before searching the _raw dataset?
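One pattern is to normalize the argument inside the macro body with a small subsearch, so the cleaned date stays in the base search. A sketch of a one-argument macro body (the argument name date is illustrative):

```spl
index=xyz earliest=-7d reportDate=[| makeresults
    | eval d=if(match("$date$", "^\d{4}-\d{2}-\d{2}$"), "$date$", strftime(now(), "%F"))
    | return $d]
```

The subsearch rewrites inputs like today into today's date before the main search runs, which keeps the filter in the base search rather than in a post-processing | search, avoiding the performance concern with the second option.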
Hi experts, we are using the infobip API service for email integration. We have created a new HTTP Request Template. In the Request URL section, we have selected the POST method and provided a URL. In Authentication, we have added a username and password; the settings here are working fine. In the Payload section, we have selected application/json:

{
  "from":"abc@xyz.com",
  "to":"test@xyz.com}",
  "subject":"AppDynamics",
  "text":"Appdynamics test Event"
}

We saved the configuration and ran tests. The test run result is success; however, the response payload throws the message below:

{"requestError":{"serviceException":{"messageId":"BAD_REQUEST","text":"Bad request"}}}

Could you please help to resolve this? Regards,
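One thing that stands out in the payload is the stray closing brace in the recipient address ("test@xyz.com}"), which an API would plausibly reject as a malformed address. A cleaned-up payload to try (whether infobip requires additional fields beyond these four is an assumption; check their API reference):

```json
{
  "from": "abc@xyz.com",
  "to": "test@xyz.com",
  "subject": "AppDynamics",
  "text": "Appdynamics test Event"
}
```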
Hi there, I know it sounds pretty easy, but I am stuck with a dashboard which splits events by hour of the day, to see for example the number of events in each hour (from 00h to 23h). My request is like this:

index=_internal
| convert timeformat="%H" ctime(_time) AS Hour
| stats count by Hour
| sort Hour
| rename count as "SENT"

The only problem with the request is that zero entries are missing from the histogram, and I want all 24 hours displayed (even those with zero results). Any way to do this? Hope it will help others.
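One way to guarantee all 24 rows is to append a zero-count row for every hour and let a second stats merge them with the real counts:

```spl
index=_internal
| convert timeformat="%H" ctime(_time) AS Hour
| stats count AS SENT by Hour
| append
    [| makeresults count=24
     | streamstats count AS n
     | eval Hour=if(n<=10, "0".tostring(n-1), tostring(n-1)), SENT=0
     | table Hour SENT]
| stats sum(SENT) AS SENT by Hour
| sort Hour
```

Hours with real events keep their counts, and empty hours show as zero.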
Hi, I'm new to Splunk. I am having a performance issue that causes a timeout over longer time spans on a dashboard base search that uses a join. I have tried replacing the join with the methods suggested in several other Answers posts. Unfortunately, I am unable to get it to work correctly and output the same value I get from my join search. Perhaps this is because of the spath/rex extract commands I am using? Note that my actual search uses tokens; I have replaced them with asterisks to avoid confusion. Any help would be much appreciated! My code is:

index=ivr_app sourcetype="CEM-AppLog" rosterInfo
| rex "^(?:[^{]*){7}(?P<my_data>.+)"
| spath input=my_data output=vq path=TOD
| spath input=my_data output=steps path=steps{}
| spath input=my_data output=type path=type
| spath input=my_data output=virtualQueue path=virtualQueue
| spath input=my_data output=last_step path=steps{}
| eval res = mvindex(last_step, mvcount(last_step)-1)
| spath input=res output=name path=name
| spath input=res output=type path=type
| rex field=_raw "SN_CONTEXT_ID (?P<SN_CONTEXT_ID>[^\s]+) produced"
| dedup SN_CONTEXT_ID
| join type=inner SN_CONTEXT_ID
    [ search index=ivr_app "pipeline at completion" AND CALL_FLOW AND DNIS EXCHANGE NOT "NPS" NOT "TFRDEST" NOT TFRNUM NOT "SN_CONTACT_TYPE=Transfer" NOT "SN_TARGET_TYPE=Release" AND "SN_CONTACT_REASON=" AND SN_CALL_FLAGS="*" OR NOT SN_CALL_FLAGS="*"
    | dedup SN_CONTEXT_ID CONNID
    | foreach SN_CALL_FLAGS [ eval <<FIELD>> = if(isnull(<<FIELD>>) OR len(<<FIELD>>)==0, "NO_CALL_FLAG", <<FIELD>>) ]
    | search CLI="*" AND CONNID="*" AND SN_CALL_FLAGS="*" AND DNIS="*" ]
| search type="Agent"
| stats count as countAgent
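The usual join-free pattern is to retrieve both event sets in one search and use stats by the shared key. This is a skeleton sketch only (the spath/rex extractions and the subsearch's filters from the query above would slot in before the stats; the set labels are illustrative):

```spl
index=ivr_app (sourcetype="CEM-AppLog" rosterInfo) OR ("pipeline at completion" CALL_FLOW)
| rex field=_raw "SN_CONTEXT_ID (?P<SN_CONTEXT_ID>[^\s]+)"
| eval set=if(searchmatch("rosterInfo"), "roster", "pipeline")
| stats values(set) AS sets values(type) AS type by SN_CONTEXT_ID
| where mvcount(sets)=2 AND type="Agent"
| stats count AS countAgent
```

Keeping only the SN_CONTEXT_ID values seen in both sets reproduces the inner-join semantic without the join command's subsearch result limits, which is often where longer time spans fall over.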
Hi, I am trying to pass presets from the time picker as tokens in my query, e.g. "Between" and "Since" with earliest = 23/08/2020 and latest = 24/08/2020. I need to pass these dates as tokens in my query when I select "Between" in the time range picker, and similarly when I select Since, Last 24 hours, Last 7 days, etc. Current code:

<input type="time" id="date" token="lowerdate" searchWhenChanged="true">
  <label>Date</label>
  <default>
    <earliest>*</earliest>
    <latest>*</latest>
  </default>
</input>
<search>
  <query>index=ABCD | search DATE=$db_earliest$ | table DATE</query>
  <earliest>-48h@h</earliest>
  <latest>now</latest>
  <progress>
    <eval token="db_earliest">strftime("$lowerdate.earliest$", "%Y%m%d")</eval>
    <eval token="db_latest">if (match("$lowerdate.latest$","now"), strftime(now(),"%Y%m%d"), strftime("$lowerdate.latest$", "%Y%m%d"))</eval>
  </progress>
</search>

Any suggestions would be great.
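One wrinkle with this approach: when a preset such as Last 24 hours is selected, $lowerdate.earliest$ is a relative string like -24h@h rather than an epoch, so strftime() on it yields nothing. A more defensive eval sketch (same token names as above):

```xml
<eval token="db_earliest">if(match("$lowerdate.earliest$", "^\d+(\.\d+)?$"),
  strftime(tonumber("$lowerdate.earliest$"), "%Y%m%d"),
  strftime(relative_time(now(), "$lowerdate.earliest$"), "%Y%m%d"))</eval>
```

relative_time() understands the -24h@h style modifiers, so both presets and absolute picks produce a %Y%m%d date; the all-time earliest of * would still need its own branch.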
Hi, below is my props.conf on my heavy forwarder. I have recently found that a few JSON messages are completely missing from Splunk. It's a high-transaction system. When I check my source JSON logs, e.g. out of 10 JSON payloads, 1-2 don't get indexed, even though all 10 payloads have similar content and the same number of lines.

[dp_json]
SEDCMD-strip_prefix = s/^[^{]+//g
SEDCMD-dumpxml = s/(\<|\>\\r\\n).*//g
SEDCMD-remove = s/\"(shippingAddress)\"\s+\:\s+{[\s\S]*?(?=\n.*?{)//g
INDEXED_EXTRACTIONS = JSON
NO_BINARY_CHECK = true
category = Custom
description = dp_json_custom
disabled = false
pulldown_type = true
DATETIME_CONFIG = CURRENT
TRUNCATE = 100000
MAX_EVENTS = 10000

I couldn't troubleshoot splunkd.log on the forwarder because I continuously get the message below. I can't ask the source application to change the JSON payload to rectify this error, so I am living with it.

08-24-2020 13:19:52.474 +1000 ERROR JsonLineBreaker - JSON StreamId:10360380474397151566 had parsing error:Unexpected character while looking for value: 'a' - data_source="D:\Logs\myjson.log", data_host="myjsonhost", data_sourcetype="dp_json"

Is there a way to get notified if any events miss indexing? Hope someone has faced the same issue. We need an urgent resolution, as we don't want to miss any data in Splunk.

Thanks, Naresh
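Since the forwarder logs every JsonLineBreaker failure to _internal, the misses themselves are searchable and can drive an alert. A sketch:

```spl
index=_internal sourcetype=splunkd component=JsonLineBreaker log_level=ERROR
| stats count by data_source data_sourcetype
```

Scheduled with a "number of results greater than 0" trigger, this notifies you whenever events fail to parse, including which source and sourcetype were affected.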
I have a problem with a second NOT inputlookup that doesn't work. If I break out of the second inputlookup and write the exclusions inline in SPL, it works. For example, the following search works:

index=foo sourcetype=foosource
    [| inputlookup mystuff.csv | rename field1 AS interest | fields interest ]
| search NOT interest IN ("*jump*","*sheet*","*hang*","*worry*")
| table interest

However, if I move the exclusions into a lookup, it ignores the CSV file and shows me data that I have excluded:

index=foo sourcetype=foosource
    [| inputlookup mystuff.csv | rename field1 AS interest | fields interest ]
| search NOT
    [| inputlookup myexcludedstuff.csv | rename field1 AS interest | fields interest ]
| table interest
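One way to debug this is to finish the exclusion subsearch with format, which renders its rows as an explicit boolean you can inspect in the Job Inspector (lookup and field names as in the question):

```spl
index=foo sourcetype=foosource
    [| inputlookup mystuff.csv | rename field1 AS interest | fields interest ]
| search NOT
    [| inputlookup myexcludedstuff.csv
     | rename field1 AS interest
     | fields interest
     | format ]
| table interest
```

format produces something like ( ( interest="*jump*" ) OR ( interest="*sheet*" ) ), so you can verify the wildcard values actually survived from the CSV; an exclusion lookup that returns no rows is also worth ruling out.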
Hello all, in my organisation the Nessus scanner scans the Splunk servers and other application servers. The scanner found the vulnerabilities CVE-2012-4930 and CVE-2012-4929 on port 8089. The Splunk servers have OpenSSL certs, and the other application servers have Splunk UFs installed as well. The findings are:

SSL Self-Signed Certificate
SSL Certificate Cannot Be Trusted
SSL Certificate with Wrong Hostname
Transport Layer Security (TLS) Protocol CRIME Vulnerability

Can anyone please share input on what I have to do to remove the above vulnerabilities?

1. For the Splunk servers, what changes need to be done?
2. For the application servers where the UF is installed, what changes need to be done?
3. Or is installing trusted SSL certs on the Splunk servers enough to remove the vulnerabilities?
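For the CRIME findings (CVE-2012-4929/4930), the usual mitigation on the Splunk side is to disable SSL compression and restrict the management port to modern TLS. A server.conf sketch for the Splunk servers (verify attribute support for your exact version against the docs):

```ini
[sslConfig]
sslVersions = tls1.2
allowSslCompression = false
```

The same [sslConfig] settings apply in the UF's server.conf on the application servers. Replacing the default self-signed certs with CA-signed certs bearing the correct hostnames (question 3) addresses the three certificate findings but not the protocol finding, so both changes are needed.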
My query fetches host and incident from the subject line by using the rex commands below:

| rex field=subject max_match=0 "(?<Incident>INC\d{12})"
| rex field=subject "(?<host>[a-z]{5}\d{3}\d[a-z]{4}\d\d)"

My query matches hosts from the first query (which displays hosts based on some event code); those hosts are then searched for in the subject line, and the incident is displayed in a separate column. However, the incident is not fetched for hosts that appear in UPPERCASE letters in the subject, and the incident column remains blank for those hosts.
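The host pattern only permits lowercase letters, so subjects containing uppercase hostnames never match. The inline case-insensitive flag is the smallest fix:

```spl
| rex field=subject "(?i)(?<host>[a-z]{5}\d{3}\d[a-z]{4}\d\d)"
```

With (?i), [a-z] also matches A-Z; following up with `| eval host=lower(host)` normalizes the value so it compares cleanly against the hosts from the first query.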
I have events sent from a configuration management tool that may contain a status of either 'Job Started' or 'Job Completed'. My goal is to write a search that shows me jobs that are still in progress. My approach is a search that looks for events by job ID where there is a 'Job Started' event for that ID but no 'Job Completed' event. The job started search is simple, and I can successfully return a list of job IDs that have an event with the status "Job Started":

index=cm_tool event_status="Job Started"
| table job_id

Similarly, the job completed search is just as easy:

index=cm_tool event_status="Job Completed"
| table job_id

What I would like to do now is show in a table only the job_ids that are returned by the first search but have no completed event as returned by the second search. Effectively, I'd like a list of unique job_ids with a started event but no completed event. I've played around with subsearches, but I am not having much luck. How might I go about doing this?
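A single pass with stats avoids the subsearch entirely: collect every status per job_id and keep the IDs whose only status is 'Job Started'. A sketch (field names as in the question; last_seen is an addition for convenience):

```spl
index=cm_tool event_status="Job Started" OR event_status="Job Completed"
| stats values(event_status) AS statuses latest(_time) AS last_seen by job_id
| where mvcount(statuses)=1 AND statuses="Job Started"
| table job_id last_seen
```

Because values() gathers all distinct statuses per job_id, a job with both events has mvcount(statuses)=2 and is filtered out, leaving only in-progress jobs.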