All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hello,

My requirement: if the field "fields.summary" contains the string ".DT", I want to create a new field "Summary" and set its value to "Security Incident". I have created the query below, but it's not working as expected.

index="main" AND source=jira
| spath
| eval summary=if(match (fields.summary,".DT-"),"Security Incident","no")

Please advise.

Thanks,
Siddarth
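A likely fix, as a sketch (untested against the poster's data): in eval, field names containing dots must be wrapped in single quotes, match() takes a regular expression so the literal dot should be escaped, and the new field should be named Summary rather than summary:

index="main" source=jira
| spath
| eval Summary=if(match('fields.summary', "\.DT-"), "Security Incident", "no")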
I am trying to create an alert that triggers when the location field of a login event for a user changes: if a user logged in from London earlier and the next login comes from Dublin, I want an alert to trigger. The login event has a username field and a client.geoLocation.city field.
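A streamstats sketch (the index and event filter are assumptions; substitute the real ones):

index=auth_logs eventType=login
| rename client.geoLocation.city as city
| sort 0 _time
| streamstats current=f window=1 last(city) as previous_city by username
| where isnotnull(previous_city) AND city != previous_city

streamstats carries each user's previous city onto the current event, so the where clause keeps only logins whose city differs from the one before it.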
I run large searches at the start of each month. Generally I use the savedsearch command to retrieve the results on dashboards, e.g. | savedsearch report_name. However, we sometimes use outputlookup at the end of the search and inputlookup to retrieve the data on the dashboard, e.g. | outputlookup report_file.csv.

I have recently had some issues with savedsearch:

- jobs being deleted, which causes my saved searches to disappear
- for a saved search to be refreshed, the report needs to be rescheduled and run again
- odd behaviour with reports running but the data not actually being picked up by dashboards

These issues do not apply to outputlookup reports, which can more easily be re-run and can also be edited with the Lookup Editor if required. Can anybody tell me which is more efficient to use and which should be the default option? Are there any advantages or disadvantages to either command I have not considered?
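For reference, the two retrieval patterns being compared (names are the poster's own placeholders):

Dashboard panel re-running the scheduled report:
| savedsearch report_name

Scheduled report ending in outputlookup, dashboard panel reading the CSV back:
... | outputlookup report_file.csv
| inputlookup report_file.csv

Results written with outputlookup persist as a CSV until overwritten, independent of job artifacts and scheduling, which matches the difference in behaviour described above.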
I want to run some commands on my Splunk heavy forwarder servers and output the results to a folder, then monitor those folders and push the data to the Splunk indexers. Is my only option installing Universal Forwarders on the same servers, or can I configure inputs.conf and outputs.conf?
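One common pattern (a sketch; the script path, interval, index and sourcetype are placeholders) is a scripted input in inputs.conf on the heavy forwarder itself, so no extra Universal Forwarder is needed; the HF's existing outputs.conf already forwards the results to the indexers:

[script://$SPLUNK_HOME/etc/apps/my_app/bin/collect.sh]
interval = 300
index = main
sourcetype = my_command_output

Alternatively, a plain monitor stanza on the folder the commands write to:

[monitor:///opt/output/folder]
index = main
sourcetype = my_command_output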
Requirement: call REST APIs and ingest the data into Splunk into specified indexes.

As of now, we are using the Splunk Add-on Builder application to create apps for the REST API calls and import the data into Splunk.

Limitation with this approach: we are not able to call an API dynamically or on an ad-hoc basis (i.e. only when needed). The team wants a UI to call the REST APIs dynamically and show the data on a dashboard. Is there any way in Splunk to provide this capability?
I have 2 CSV files: the first one has name and id, the second one has the id only. I can extract the common ids, but I couldn't find the query to show the corresponding names using the id. Can anybody help, please?
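A minimal sketch, assuming both files are uploaded as lookup table files (the file and field names are placeholders):

| inputlookup ids_only.csv
| lookup names_and_ids.csv id OUTPUT name
| table id name

inputlookup streams the id-only file, and lookup enriches each row with the matching name from the other file.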
Hello,

After upgrading Splunk from version 8.1.5 to 9.0, we are getting an "indexing not ready" error on the Splunk deployment server. Is there anything we need to perform on the indexer cluster? What is the solution? Can anyone help?
Hi Community,

I have a search query where I am trying to get values for the search from the results of another query.

index=_internal [ `set_local_host`] source=*license_usage.log* type="Usage"
| eval h=if(len(h)=0 OR isnull(h),"(SQUASHED)",h)
| eval s=if(len(s)=0 OR isnull(s),"(SQUASHED)",s)
| eval idx=if(len(idx)=0 OR isnull(idx),"(UNKNOWN)",idx)
| bin _time span=1d
| stats sum(b) as b by _time, pool, s, st, h, idx
| search pool = "*"
| search h = hp742srv OR dell970srv OR dell428srv OR hp548srv OR dell429srv OR dell477srv OR dell433srv
| timechart span=1d sum(b) AS volumeB by idx fixedrange=false limit=30
| join type=outer _time [ search index=_internal [ `set_local_host`] source=*license_usage.log* type="RolloverSummary" earliest=-30d@d
    | eval _time=_time - 43200
    | bin _time span=1d
    | stats latest(stacksz) AS "stack size" by _time]
| fields - _timediff
| foreach * [ eval <<FIELD>>=round('<<FIELD>>'/1024/1024/1024, 3)]

The "| search h = ..." statement has a list of host names which I have entered manually using the OR operator. The query below can generate that list of hosts, but I am not able to use its result in the query above.

index=mx_logs "mx.env"="dell1192srv.fr.mx.com:15022"
| table host
| dedup host

How can I use the results from the second query dynamically in the first SPL query? Thanks in advance.

Regards,
Pravin
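A subsearch sketch that replaces the manual OR list (the rename maps the subsearch's host field onto the outer query's h field):

...
| search pool = "*"
| search [ search index=mx_logs "mx.env"="dell1192srv.fr.mx.com:15022"
    | dedup host
    | fields host
    | rename host as h ]
| timechart span=1d sum(b) AS volumeB by idx fixedrange=false limit=30
...

Splunk expands the subsearch into (h="..." OR h="...") at run time, so the host list tracks whatever the mx_logs query returns.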
From AWS storage we are already getting data into a territory-specific instance (example: Singapore on-prem). Now I want the same data in the Singapore instance as well as in the global instance (Cloud). How can I do this? Can anyone suggest a solution, and if there is one, what potential roadblocks might I face while trying it?
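One common approach (a sketch; group names, hosts and ports are placeholders) is to clone the feed at the forwarding layer by listing two target groups in outputs.conf, which sends a copy of the data to each destination:

[tcpout]
defaultGroup = singapore_onprem, global_cloud

[tcpout:singapore_onprem]
server = sg-indexer.example.com:9997

[tcpout:global_cloud]
server = inputs.example.splunkcloud.com:9997

Potential roadblocks may include doubled licence consumption (each copy is metered by the instance that indexes it) and the network/TLS requirements of the cloud input.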
Hi all, I need some help sorting an eval field by one of its components, per below.

...
| eventstats count(ID) AS countID by severity, name
| eval name_count=name." (".countID.")"
| stats values(name_count) AS Signatures count by severity

This gives me something like...

severity    Signatures
Critical    asig0 (34)
            bsig1 (2)
            csig2 (76)
High        asig3 (1)
            bsig4 (23)
            csig5 (22)

What I want...

severity    Signatures
Critical    csig2 (76)
            asig0 (34)
            bsig1 (2)
High        bsig4 (23)
            csig5 (22)
            asig3 (1)

Is there any way I can sort the Signatures column by the values in the countID field? Thanks in advance!
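A sketch of one workaround: values() always sorts its output lexicographically, so sort the rows by countID first and aggregate with list(), which preserves row order (the dedup keeps one row per signature so list() does not repeat values; the event-count column from the original stats would need to be computed before the dedup):

...
| eventstats count(ID) AS countID by severity, name
| eval name_count=name." (".countID.")"
| dedup severity name_count
| sort 0 severity -countID
| stats list(name_count) AS Signatures by severity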
Hi, I'm trying to extract the string "domain.com" from <mail@domain.com>. How can I extract the string between "@" and ">"? Thanks
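A rex sketch (the capture-group name domain is arbitrary):

... | rex field=_raw "@(?<domain>[^>]+)>"

This captures everything between the @ and the closing >, yielding domain.com from <mail@domain.com>.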
Hi,

I am using a Palo Alto firewall. I am getting the firewall logs into a syslog server, monitoring those logs, and forwarding them to the indexer and the search head. I want to exclude events from a particular src_ip from indexing, as that source is generating a high volume of logs and consuming my license. How do I exclude these events? Please let me know.

Thanks
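The usual index-time filter (a sketch; the sourcetype and IP are placeholders) routes matching events to the nullQueue in props.conf/transforms.conf on the heavy forwarder or indexer, not on a Universal Forwarder:

props.conf:
[pan:traffic]
TRANSFORMS-drop_noisy_src = drop_noisy_src

transforms.conf:
[drop_noisy_src]
REGEX = 10\.1\.2\.3
DEST_KEY = queue
FORMAT = nullQueue

Events dropped this way never reach an index, so they do not count against the license.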
I have 4 single values that show different values, and I want to be able to click on each of them and bring up a table below showing my information. I currently have this set up for one of the single values, which shows the number of failed MFA challenges: when that value is clicked, a table opens up to display the account id, the email address, another id number, and a timestamp.

Here is the code for the single value:

index=keycloak "MFA"
| regex _raw="MFA challenge failed"
| stats count

and here is the code I have for the statistics table that opens when the single value is clicked:

index=keycloak "MFA"
| eval ONE="$failed$"
| rex "account\s+(?<account>\w+)\s+with\s+email\s+(?<email>[^ ]+)\s+\w+\s+\w+\s+\w+\s+\w+\s+(?<keycloak_id>[a-z,0-9,-]+)"
| where isnotnull(account)
| table account, email, keycloak_id, _time

The eval ONE="$failed$" corresponds to the drilldown editor settings for the single value, which are:

On Click: Manage tokens on this dashboard
Set failed = $click.value2$
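For reference, the equivalent pattern in Simple XML (a sketch; the token name failed matches the poster's setup, the rest is illustrative) hides the table's panel until the token is set by the click:

<panel>
  <single>
    <search>
      <query>index=keycloak "MFA" | regex _raw="MFA challenge failed" | stats count</query>
    </search>
    <drilldown>
      <set token="failed">$click.value2$</set>
    </drilldown>
  </single>
</panel>
<panel depends="$failed$">
  <table>
    <search>
      <query>index=keycloak "MFA challenge failed" | table _time</query>
    </search>
  </table>
</panel>

The table's query above is trimmed; the poster's full rex/table search would go in its place.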
The requirement is that we have a dropdown with a list of options, one of which is ALL. I have a search query which fetches events based on the selected value. Now I want to group them by name and display an individual panel for every name in the dropdown. Example below:

Dropdown: UAE, USA, India, Australia, UK, ALL

Search query:

index=population name=<$dropdownvalue> | timechart count sum(people) span=1d

Expectation: when I select name as UAE, the panel displays a timechart of the UAE population. However, when ALL is selected, I want to display 5 panels, each showing the timechart of a specific country's population. Is that feasible? I have tried searching the articles and Splunk documentation with no luck.
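One feasible approach (a sketch; it assumes the ALL choice passes * as the token value) is a single panel whose search splits by name, rendered with the trellis layout so each country gets its own mini-panel:

index=population name=$dropdownvalue$
| timechart span=1d sum(people) by name

Selecting a single country yields one trellis cell, while ALL (name=*) yields one cell per country; trellis is enabled in the visualization's Format menu, split by the name field.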
Hi

In my dashboard, I use a search in 2 different ways:

1) an inline search, which works fine
2) a scheduled search which is exactly the same as the inline search but which returns no results, even though the search ends correctly!

NB: this search was returning results at the beginning, so it's very strange.

I don't know if it's important, but when I have a look at the job inspector I see the message below:

info : [subsearch]: Your timerange was substituted based on your search string

And what is even stranger is that when I run the search on its own (that is, outside the dashboard) I also get no results! How is this possible, please?

Thanks
Unable to set up the controller for a trial account.
I'm trying to get the App Agent running on a Windows ECS Fargate container. The agent installs, connects to the coordinator, and registers the machine agent, but the App Agent and CLR are not discovered/registered. On a regular Windows Server instance, running 'iisreset' usually resolves this issue. The container startup script does run iisreset, but it's obviously not working. I get the same result if I build and run the container locally on a Windows Server, so Fargate is not the issue.

config.xml:

<?xml version="1.0" encoding="utf-8"?>
<appdynamics-agent xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <controller host="AppDynamicsAppHostName" port="443" ssl="true" enable_tls12="true">
    <application name="AppDynamicsAppName" />
    <account name="AppDynamicsAppAccountName" password="AppDynamicsAppAccountPassword" />
  </controller>
  <machine-agent />
  <app-agents>
    <IIS>
      <applications>
        <application path="/" site="api">
          <tier name="AppDynamicsTierName" />
        </application>
      </applications>
    </IIS>
  </app-agents>
</appdynamics-agent>

When the container registers with the coordinator, it does NOT assign the agent to the application tier. As far as I can tell from the logging, there are no errors or issues during install or when the container starts. What I don't understand, and what isn't discussed in the documents, is how AppDynamics determines whether the App Agent is installed, and where I need to check (logs/XML/config) to find any misconfiguration.
I am trying to create an alert for multiple failed logins, but my query doesn't seem to work. The alert is detailed in the image attached, and the query is:

index="authenticate" eventType="user.session.start" outcome.result="FAILURE"
| stats count by actor.alternateId

Please help me correct the query.
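A sketch that fires only above a threshold (the 5-failures threshold is an assumption; adjust to the alert's intent):

index="authenticate" eventType="user.session.start" outcome.result="FAILURE"
| stats count by actor.alternateId
| where count >= 5

With the threshold in the search itself, the alert's trigger condition can simply be "number of results > 0".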
I have an indexer cluster and a search head where ITSI is installed. I am planning to upgrade my Splunk environment. Do I have to take any precautions with my ITSI instance?
Hi everyone,

I'll explain the installation scenario and requirement first so that the question makes better sense.

Installation:
- Standalone Splunk Enterprise installed on the TEST01 server.
- Standalone Splunk Enterprise installed on the PT01 server.

Task: forward/route data from one specific folder on TEST01 to PT01. All the rest of the data should reside on TEST01 only and should be searchable there. This is a business requirement.

I tried adding [tcpout:PT01] to outputs.conf and _TCP_ROUTING to the [monitor] stanza for that folder on TEST01, but that ended up sending all the data from TEST01 to PT01 instead of just that specific data. As a different approach, I added transforms, props and outputs .conf files according to this doc - Route and filter data - but that didn't help and apparently induced some instability in the TEST01 Splunk Enterprise installation, as it was not able to stop and start correctly.

Any guidance on how I can achieve this would be very much appreciated.
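A sketch of the selective-routing setup (server name, port, path and sourcetype are placeholders): leave defaultGroup unset so nothing is forwarded automatically, keep local indexing on with indexAndForward, and apply _TCP_ROUTING only to the one monitor stanza.

outputs.conf on TEST01:

[tcpout]
indexAndForward = true

[tcpout:pt01_group]
server = PT01:9997

inputs.conf on TEST01:

[monitor:///data/special/folder]
_TCP_ROUTING = pt01_group
index = main
sourcetype = special_data

With no defaultGroup, only inputs that explicitly name a _TCP_ROUTING group are forwarded (in this sketch the routed folder is also indexed locally on TEST01). The "everything gets forwarded" behaviour described above typically comes from a defaultGroup pointing at the PT01 group.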