Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Hi everyone,  Our deployment consists of an on-prem deployment server, an on-prem heavy forwarder, and Splunk Cloud. Is there a way of getting our separate heavy forwarder to recognise our deployment server, specifically so we can configure file inputs on the heavy forwarder using deployment clients? I know they can be one and the same, but we chose to have separate servers for operational reasons. I'd appreciate anyone's views/help.
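One common approach, sketched here with a placeholder hostname, is to drop a deploymentclient.conf on the heavy forwarder that points at the deployment server's management port (8089 by default):

```ini
# $SPLUNK_HOME/etc/system/local/deploymentclient.conf on the heavy forwarder
[deployment-client]

[target-broker:deploymentServer]
# Replace with your deployment server's hostname/IP and management port
targetUri = my-deployment-server.example.com:8089
```

After a restart, the HF should phone home and appear under Forwarder Management on the deployment server, where inputs apps can be deployed to it like any other deployment client.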
Hi team, I have some problems monitoring many UFs (~400 agents) with a distributed architecture (UF --> HF --> Indexer), as below: 1. When a new UF agent connects to the Deployment Server, I have no way of knowing about it. 2. How can I prevent local users from uninstalling the UF agent on a client host? 3. How can I monitor and alert on the UP/DOWN status of a UF agent? 4. How can I tell whether Splunk apps have been deployed to an agent or not? Please give me some solutions for these cases. Thanks for your help!
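For point 1, one possible sketch is to list phone-home clients via REST, run on the deployment server itself (field names can vary by version, so verify against your instance; the strftime is only cosmetic, assuming lastPhoneHomeTime comes back as epoch seconds):

```spl
| rest /services/deployment/server/clients splunk_server=local
| table hostname, ip, lastPhoneHomeTime
| eval lastPhoneHomeTime = strftime(lastPhoneHomeTime, "%F %T")
```

Comparing this list against yesterday's (e.g. via a lookup) is one way to alert on newly registered agents.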
I am unable to receive data from the forwarder on the server, even though I have added the receiving port on the server. netstat shows the connection is established:

netstat -auntp | grep 9997
tcp 0 0 0.0.0.0:9997 0.0.0.0:* LISTEN
tcp 0 0 myserver:9997 ServerIP:60992 ESTABLISHED
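Since the TCP session is ESTABLISHED, one way to check whether the forwarder is actually sending data is to search the indexer's internal forwarding metrics (a sketch; the hostname is whatever the forwarder reports about itself):

```spl
index=_internal source=*metrics.log* group=tcpin_connections
| stats latest(_time) as last_seen sum(kb) as total_kb by hostname, sourceIp
| eval last_seen = strftime(last_seen, "%F %T")
```

If the forwarder appears here with a non-zero total_kb, data is arriving and the problem is more likely index/sourcetype configuration or the time range being searched.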
Hi Team,  Can someone provide me the regex for the below?   |search (UPN=*T@mail.eeir)
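If the goal is a regex equivalent of that wildcard search — UPN values ending in T@mail.eeir — one hedged sketch uses the regex command (note the escaped dot):

```spl
| regex UPN=".*T@mail\.eeir$"
```

The wildcard form and the regex form should match the same events; the regex version just gives finer control, e.g. anchoring or case-insensitivity with (?i).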
Hello there.  So, using rex, I've extracted from the log the time, called OSY_time, and each individual slow query, called Query. From here I want to get a graph that shows the top 20 queries by average time, in a specified time range. | eval seconds = tonumber(trim(OSY_timing)) | streamstats avg(seconds) as sec_avg by Query | sort -sec_avg | top 20 sec_avg What I want is the query on the x axis and the average time on the y axis. How can I do that? Thanks for any reply.
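Assuming the extracted field really is OSY_timing as in the SPL above, one sketch replaces the streamstats/top combination with a single stats pass, so the result has exactly one row per query:

```spl
| eval seconds = tonumber(trim(OSY_timing))
| stats avg(seconds) as avg_time by Query
| sort - avg_time
| head 20
```

Rendered as a column chart, Query lands on the x axis and avg_time on the y axis.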
Hey all! I am tasked with some housekeeping: finding out which installed apps are used the least so that I can uninstall them. Is there a search string I can use to list all the apps and see which are used often and which are used the least?  Best regards, jawk339
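One community-style sketch counts UI hits per app from the internal web access logs. It assumes Splunk Web access logging is available in _internal, and the rex pattern is an assumption about how app names appear in uri_path — verify against your own events:

```spl
index=_internal sourcetype=splunk_web_access uri_path="*/app/*"
| rex field=uri_path "app/(?<app_name>[^/]+)"
| stats count as hits latest(_time) as last_used by app_name
| eval last_used = strftime(last_used, "%F %T")
| sort hits
```

Apps near the top of this ascending sort (few hits, old last_used) are candidates for removal; note this only reflects UI usage, not scheduled searches or data inputs an app may provide.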
I have a query that returns the following result.

Column1  Column2
A1       A2
B1       B2
C1       C2
D1       D2

And I would like to transform it to a single row, something like this:

1st row  1st row  2nd row  2nd row  3rd row  3rd row  4th row  4th row
A1       A2       B1       B2       C1       C2       D1       D2

| makeresults
| eval _raw="Column1,Column2
A1,A2
B1,B2
C1,C2
D1,D2"
| multikv forceheader=1
| table Column1,Column2
| .......
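One hedged sketch numbers the rows, builds dynamic field names with eval {..}, and then collapses everything into a single row with stats (the field names Column1_row1 etc. are my own invention, not from the original question):

```spl
| makeresults
| eval _raw="Column1,Column2
A1,A2
B1,B2
C1,C2
D1,D2"
| multikv forceheader=1
| table Column1, Column2
| streamstats count as row
| eval name1 = "Column1_row" . row, name2 = "Column2_row" . row
| eval {name1} = Column1, {name2} = Column2
| fields - Column1, Column2, row, name1, name2
| stats first(*) as *
```

Because each dynamic field is non-null in exactly one row, stats first(*) collapses the four rows into one row of eight columns.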
Hi folks, I'm providing a sample of the many values I have for the field username: Roger Smith, Bob Dole, Randy Savage. I'm trying to create another field with the eval command, called Email, placing a dot between first name and last name, followed by @falcon.com. Basically I'm trying to get the new field like this: Roger.Smith@falcon.com, Bob.Dole@falcon.com, Randy.Savage@falcon.com. What would the syntax be?   Thanks in advance
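A hedged sketch, assuming username is always exactly "First Last" with whitespace between the two parts:

```spl
| eval Email = replace(username, "\s+", ".") . "@falcon.com"
```

replace swaps the run of whitespace for a dot, and the . operator concatenates the domain. Names with middle names or extra spaces would need a more careful regex.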
I have an index with three inputs, for security/application/system. Since the Application log is needed for another app on the same host, I want to exclude it from this one. How can we achieve this?
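If these are Windows event log inputs managed from a deployment server, one sketch is to split the Application stanza into its own app, so the special host's serverclass receives a version pointing at a different index (the index names below are placeholders, not from the question):

```ini
# inputs.conf in the app deployed to most hosts
[WinEventLog://Application]
disabled = 0
index = os_logs

# inputs.conf in a separate app, deployed only to the special host
# via its own serverclass (instead of the app above)
[WinEventLog://Application]
disabled = 0
index = other_app_logs
```

The Security and System stanzas stay in the shared app, so only the Application log is routed differently on that host.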
Hi everyone, I need search queries for the two points below: 1) How many alarms that are more than 90 days old are still open, and how many of them are closed? 2) How many of those were triggered in the last month and are still open, and how many are closed? These are alarms in the Incident Review dashboard. Thanks.
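Since Incident Review implies Splunk Enterprise Security, a hedged sketch for point 1 follows — the `notable` macro and the status_label field are standard in ES, but verify both against your version:

```spl
`notable`
| where _time < relative_time(now(), "-90d")
| stats count by status_label
```

Point 2 would be the same shape with the time filter changed to the last month's window.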
How do I get a license key for the trial version of Splunk Enterprise? Can anyone please help?
Hi All, I'm trying to come up with a solution where all UFs and HFs add new indexed fields to all data: env_class = the type of server the logs are from, i.e. mailserver, app_server, webserver; env_type = dev, test or prod. I can do this with an inputs.conf on the forwarder that looks like this:

# Ref: https://docs.splunk.com/Documentation/Splunk/latest/Admin/Inputsconf
# Add default indexed-time fields for this type of host
[default]
# These fields will be added to all events coming from this host. See README.TXT or fields.conf
# for how to make these searchable from the search head. All variables must be on the same line,
# separated by spaces.
# Note: if this config is on an intermediate heavy forwarder, they will also be applied to all
# events passing through that forwarder (even cooked data).
_meta = env_class::workstation env_type::prod

[WinEventLog]
_meta = env_class::workstation env_type::prod

[perfmon]
_meta = env_class::workstation env_type::prod

[WinHostMon]
_meta = env_class::workstation env_type::prod

# Untested but might be required.
# [WinRegMon]
# _meta = env_class::workstation env_type::prod

All the hosts are dynamically created and destroyed with random hostnames, hence the need for these additional fields on all events coming from each host. So, for dashboards monitoring say perfmon, the end user can quickly drill down to all the prod webservers. Now, all these additional indexed fields must be contained in the one _meta line in a config, which brings me to my dilemma. I'd like some control over this from the deployment server, with say the following server classes and associated apps:

Dev Environment App - sets env_type=dev for all hosts with *-dev-* in the hostname
Prod Environment App - sets env_type=prod for all hosts with *-prod-* in the hostname
Webserver Class App - sets env_class=webserver
Mailserver Class App - sets env_class=mailserver
etc.
The problem is that the Environment and Class apps will each override the setting of _meta, and only one will get used in the final setup. i.e.

cfg_set_env_type_prod/local/inputs.conf
[default]
_meta = env_type::prod

cfg_set_env_class_webserver/local/inputs.conf
[default]
_meta = env_class::webserver

Because each app sets _meta, only cfg_set_env_class_webserver will apply its _meta, since it wins the precedence war with its app name. So only env_class will be set and env_type will be empty. Are there any solutions that anyone can think of? Since these are UFs, we can't use transforms.conf.
I am working on time series data and would like to detect these types of troughs in the graph. The y axis is network bandwidth and the minimum value is 0. I'm applying the base time series query to a DensityProbability model and then using the following SPL for the outlier chart:

| eval leftRange=mvindex(BoundaryRanges,0)
| eval rightRange=mvindex(BoundaryRanges,1)
| rex field=leftRange "Infinity:(?<lowerBound>[^:]*):"
| rex field=rightRange "(?<upperBound>[^:]*):Infinity"
| fields _time, 1/1/g1, lowerBound, upperBound, "IsOutlier(1/1/g1)", *

What approach can I take to detect the significant dip in the graph?
Hi All, I am getting an error when I try to create a new MS-SQL connection on my Splunk server.

Connection Type: MS-SQL Server Using MS Generic Driver With Windows Authentication
JDBC URL: jdbc:sqlserver://X.X.X.X:1433;databaseName=master;selectMethod=cursor;integratedSecurity=true
ERROR: This driver is not configured for integrated authentication. ClientConnectionId:8cf05a84-6449-4ca7-b714-eb95d52480f7
Java version on my Splunk server: jre1.8.0_281
Splunk DB Connect version: DBX 3.5.1
Splunk server OS: CentOS 7

Drivers on my machine:
What is the search for account creation on macOS?
I have a large NodeRED JSON flows.json file that I'm ingesting into Splunk. In that file there are one or more 'flows', which are made up of a sequence of 'nodes'. Each NodeRED 'node' is a JSON snippet, and I have configured Splunk to ingest these as separate events. In basic form they look like this:

{ "id": "2e88d163.b8d20e", "type": "evaluator", "z": "430b6531.d34c7c", "name": "", "x": 870, "y": 300, "wires": [ [ "c53c6260.e6a06" ] ] }

where x/y/z are UI-related attributes, but id, type and wires are key to the flow sequence. A node can be connected to any number of other nodes via the 'wires', where each id references the id of another node. As a bit of an exercise, I started to wonder if it was possible to 'transaction' all the nodes involved in a single flow so that all the node objects could then be visualised in either a simple table or a sequence diagram. The challenge seems to be that there is no common attribute to join all the nodes together. There can be any number of wires in the array, indicating different paths in the flow, and the flow can have as many nodes as it likes. In my case it always starts with a particular 'type' and ends with another 'type', so I know when the flow starts and ends. I did think of putting all this data into a lookup, but I'm still not sure it's possible to collect all nodes in a flow, as it seems I would need an unknown number of passes through the data to fill in the wire connections. Can anyone think how this could be done?
I'm monitoring a Windows drive for any files ending in *.lrr and *.eve, because we have no control over where the files will be created. This may not be efficient, but it works. I want to blacklist a folder on the drive, and any sub-folders, so that the above files are not monitored if they are in the blacklisted folder. The inputs.conf is:

[monitor://D:\...\*.lrr]
disabled = false
whitelist =
index = au_cpe_common_app
sourcetype = LoadRunner_LRR
crcSalt = <SOURCE>
initCrcLength = 1000

[monitor://D:\...\_t_rep.eve]
disabled = false
whitelist =
index = au_cpe_common_app
sourcetype = LoadRunner_EVE
crcSalt = <SOURCE>

[monitor://D:\DoNotMonitor\]
disabled = false
whitelist =
blacklist = .+
recursive = true

I would have expected the above blacklist to stop any files in the D:\DoNotMonitor folder being monitored recursively, but it is ingesting files with source "D:\\DoNotMonitor\\donotmonitortest29042021\\_t_rep.eve". What is the correct way to specify this? I can't find a well-documented example of this specific use case.
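One thing worth noting: whitelist/blacklist are evaluated per monitor stanza against each matched file's full path, so a separate stanza of its own cannot exclude files that the wildcard stanzas already match. A sketch that adds the exclusion to the existing stanza instead (the regex is an assumption about your path layout; apply the same line to the .eve stanza too):

```ini
[monitor://D:\...\*.lrr]
disabled = false
index = au_cpe_common_app
sourcetype = LoadRunner_LRR
crcSalt = <SOURCE>
initCrcLength = 1000
# Exclude anything under D:\DoNotMonitor and its sub-folders
blacklist = (?i)\\DoNotMonitor\\
```

With that in place the third stanza for D:\DoNotMonitor\ can be dropped entirely.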
I inherited a Splunk environment and noticed on the heavy forwarder's "Forwarding and receiving" page that, in addition to some indexers, this HF's own server is also configured as a forwarding destination. Why the previous admin would have done this is beyond my understanding, but I'm keen to know what possible repercussions it can lead to.
The data is MFA attempts in O365. I have an alert that fires whenever someone denies an MFA push. The thing is, sometimes someone has just accidentally tapped "deny", and they use MFA successfully in the next minute or two. Sample data:

_time                msg                                    event_name
2021-04-28 16:13:49  Single                                 EVENT_CATEGORY_SSO_LOGIN
2021-04-28 16:13:46  Multi-factor                           EVENT_CATEGORY_FACTOR_AUTH_SUCCESS
2021-04-28 16:13:43  send_factor_verify_push                EVENT_CATEGORY_UNSPECIFIED
2021-04-28 16:13:38  user.mfa.okta_verify.deny_push         EVENT_CATEGORY_UNSPECIFIED
2021-04-28 16:13:28  send_factor_verify_push                EVENT_CATEGORY_UNSPECIFIED
2021-04-28 16:13:26  Log                                    EVENT_CATEGORY_LOGIN
2021-04-28 16:13:26  policy.evaluate_sign_on                EVENT_CATEGORY_UNSPECIFIED
2021-04-28 16:13:26  message_sent.new_device_notification   EVENT_CATEGORY_UNSPECIFIED

What I want is to filter on messages that contain "deny_push" but are not followed by a successful authentication within 5 minutes after the deny_push event. How on earth do I do that?
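One hedged sketch: sort newest-first per user, carry the nearest later success time back onto each event with streamstats, and keep only denies with no success within 300 seconds. It assumes a user field exists to correlate on — adjust the field and base-search names to your sourcetype:

```spl
msg="user.mfa.okta_verify.deny_push" OR event_name="EVENT_CATEGORY_FACTOR_AUTH_SUCCESS"
| eval success_time = if(event_name="EVENT_CATEGORY_FACTOR_AUTH_SUCCESS", _time, null())
| sort 0 user - _time
| streamstats current=f last(success_time) as next_success_time by user
| where msg="user.mfa.okta_verify.deny_push"
    AND (isnull(next_success_time) OR next_success_time - _time > 300)
```

Because events are sorted descending by time, the rows streamstats has already seen are the ones that happened after the current deny, so next_success_time is the closest subsequent success (or null if there was none).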
I am working with JSON events and am trying to extract the username (user1, user2) from the pathspec data structure in my events (sample below):

"pathspec": {"__type__": "PathSpec", "location": "/media/APA_windows/Users/user1/AppData/Local/Microsoft/Windows/UsrClass.dat", "type_indicator": "OS"}
"pathspec": {"__type__": "PathSpec", "location": "/media/APA_windows/Users/user2/AppData/Local/Microsoft/Windows/UsrClass.dat", "type_indicator": "OS"}

I am using the SPL below to split pathspec.location into a multi-value field and then use mvindex:

..... | makemv delim="/" pathspec.location | eval user_name = mvindex(pathspec.location, 3)

However, when I table out the user_name field it does not show any results, and I'm not sure why this is not working. Any suggestions would be helpful. The desired output for the user_name field would be:

user1
user2
...
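One likely culprit: inside eval, pathspec.location parses as a concatenation of two (nonexistent) fields named pathspec and location, so dotted field names must be single-quoted there. A hedged sketch that skips makemv entirely and pulls the name out with rex, assuming the location always contains a /Users/<name>/ segment:

```spl
| rex field="pathspec.location" "/Users/(?<user_name>[^/]+)/"
| table user_name
```

If pathspec.location is not auto-extracted as a search-time field, running spath first (e.g. | spath path=pathspec.location output=location) and then rex on location is an equivalent fallback.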