All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Need some suggestions on dynamic sourcetype extraction: does Splunk support extracting the sourcetype from the path used in a monitor stanza in inputs.conf?

For example: /var/log/test-function_name.log. From this path I want to extract the function name and use it as the sourcetype. This also means there will be multiple log files under /var/log, one per function_name.

The reason I need this is that my log events do not include the function name in each and every event. The events come in more like:

Log forwarding initializing for function=test/function-name
job_id: XX created
----------------Logs-----------------
job_id: XX killed

So what is the best way to extract the function_name?
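This can be done at parse time: a transforms.conf rule can rewrite the sourcetype from the source path. A minimal sketch, assuming the files match /var/log/test-<function_name>.log (stanza and transform names here are illustrative); it must live on the parsing tier (indexer or heavy forwarder), not on a universal forwarder:

```
# transforms.conf
[set_sourcetype_from_path]
SOURCE_KEY = MetaData:Source
REGEX      = /var/log/test-([^/]+)\.log
DEST_KEY   = MetaData:Sourcetype
FORMAT     = sourcetype::$1

# props.conf
[source::/var/log/test-*.log]
TRANSFORMS-set_st = set_sourcetype_from_path
```

The capture group becomes the sourcetype, so each function_name file indexes under its own sourcetype without the events themselves containing the name.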
I am looking to trigger an alert in Splunk if a new error appears in the server logs. A new error is one that was not present in the server logs in the past week. I have an index for the logs, index=Serverlogs1. Please help!
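One hedged sketch: search a window covering the baseline week plus the alert day, record each error's first appearance, and keep only errors first seen recently. Here error_msg stands in for however you extract the error message from your events:

```
index=Serverlogs1 error earliest=-8d latest=now
| stats earliest(_time) as first_seen by error_msg
| where first_seen >= relative_time(now(), "-24h")
```

Saved as an alert over the 8-day window, this fires only for messages whose first occurrence falls inside the last day.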
Working with the Python SDK, my end goal is to fetch logs over a given time range. For now I'm trying to output saved searches, and later I'll move on to the logs. Referencing the docs, this is close to what I want to do, minus the delete portion: https://docs.splunk.com/DocumentationStatic/PythonSDK/1.6.5/client.html?highlight=saved%20searches#splunklib.client.SavedSearches

Something like:

for saved_search in saved_searches.iter(pagesize=10):
    print(saved_search)

but I'm not getting any output. Any ideas on where to go? For clarity, I'm using the oneshot method and want to output saved search results.
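A minimal sketch with splunklib (host and credentials are placeholders; this assumes the splunk-sdk package and a reachable Splunk instance, so it is not runnable standalone):

```python
import splunklib.client as client
import splunklib.results as results

# Connect to Splunk (host/credentials are placeholders)
service = client.connect(host="localhost", port=8089,
                         username="admin", password="changeme")

# List saved search names: iterate the collection on the service itself
for saved_search in service.saved_searches:
    print(saved_search.name)

# Run a search with oneshot and print its results
rr = service.jobs.oneshot("search index=_internal | head 5")
for result in results.ResultsReader(rr):
    print(result)
```

If the loop prints nothing, check that the connected account can actually see saved searches: the collection is scoped by the app and user context of the session.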
Hi Team, I have logs with a double-pipe separator (||) and need to get the key values out of them.

Log pattern: logs ........|ab-c=1234||xy-z=1598||cd-e=5ab4||....more logs

I need a table with the values of ab-c, xy-z, and cd-e. So far I have tried:

search | dedup ab-c, cd-e, xy-z | table ab-c, xy-z, cd-e

but it's not working. Please suggest.
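dedup and table only work once the fields exist, and hyphenated names like ab-c are not auto-extracted, so an explicit rex is needed first. A sketch (adjust the patterns to your real events; underscores are used because regex capture-group names cannot contain hyphens):

```
... your search ...
| rex "ab-c=(?<ab_c>[^|]+)"
| rex "xy-z=(?<xy_z>[^|]+)"
| rex "cd-e=(?<cd_e>[^|]+)"
| table ab_c, xy_z, cd_e
```

Hyphenated field names also need quoting in SPL eval expressions ('ab-c'), which is another reason to normalize to underscores at extraction time.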
To all: still learning about regex... I looked at RUBULAR.COM and REGEX101.com to figure out how to pull out the user ids. In the example below I need to get 4 user ids out. I matched on the single quote ' but was not able to get the 4 ids in one swoop... any suggestions? It's just not that easy... I watched a couple of YouTube videos on regex... it's just not all that intuitive when the conditions are a little bit trickier. I appreciate any help I can get.

Multiple 'access denied' events detected with protocol smb (at least 41 failed attempts in 15 seconds). Last usernames used in login requests are: 'NA\HXXX6LBDBMCXT2$', 'NA\RKXXXEDE', 'UPSTREAM\dXXXcline', 'ULAB\l3xxxxcli'. Last path trying to access FA Labs\XXXM\Lumisizer\08-xx-2020_1201 A-D, 1201 A2-D2 - Copy\1201 C.xlsx
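One way to grab all four ids in a single pass is to key on the backslash: the user ids are the only quoted strings containing DOMAIN\user separators, which skips the quoted phrase 'access denied'. A Python sketch of the idea (the same pattern works in regex101 or in Splunk's rex with max_match=0):

```python
import re

message = ("Multiple 'access denied' events detected with protocol smb "
           "(at least 41 failed attempts in 15 seconds). Last usernames used in "
           "login requests are: 'NA\\HXXX6LBDBMCXT2$', 'NA\\RKXXXEDE', "
           "'UPSTREAM\\dXXXcline', 'ULAB\\l3xxxxcli'. Last path trying to access "
           "FA Labs\\XXXM\\Lumisizer\\08-xx-2020_1201 A-D, 1201 A2-D2 - Copy\\1201 C.xlsx")

# Match quoted strings only if they contain a backslash (DOMAIN\user),
# so the quoted phrase 'access denied' is not captured.
user_ids = re.findall(r"'([^']*\\[^']*)'", message)
print(user_ids)
# → ['NA\\HXXX6LBDBMCXT2$', 'NA\\RKXXXEDE', 'UPSTREAM\\dXXXcline', 'ULAB\\l3xxxxcli']
```

findall returns every non-overlapping match, so all four ids come out in one swoop without listing them individually.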
Need help with a Splunk query to identify an anomaly: an increase in the frequency of errors in the logs. The historic data to compare against (the baseline error frequency) can be the past week.
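A sketch of a baseline comparison: count errors per hour over the week, compute the average and spread, and flag hours well above them. The index name, span, and the two-sigma threshold are all assumptions to tune:

```
index=yourindex error earliest=-7d
| timechart span=1h count as errors
| eventstats avg(errors) as baseline stdev(errors) as spread
| where errors > baseline + 2 * spread
```

eventstats keeps every hourly row while attaching the overall baseline, so the where clause surfaces only the anomalous hours.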
I apologize if this has been asked and answered. I tried searching the forum but couldn't find the answer (it might have been that I don't know what to search for).

We are logging VPN logins and I have a requirement to track the client version over time as we upgrade it. I have a log message that has both user and version, and I am trying to plot a daily chart that shows the number of users whose last login was with each version of the software. So far I have:

| stats first(Version) AS version by User

which looks like it gives me a table of the last version that each user logged in with, but first of all it doesn't seem super efficient. I am also lost on how to:
- Turn it into a count of the number of entries for each version
- Chart this for past values
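For the current snapshot, a second stats pass turns the per-user table into per-version counts (a sketch using the field names from the post; latest() picks by event time, where first() picks by search order):

```
| stats latest(Version) as version by User
| stats count as users by version
```

For the daily trend, one approach is to bin by day, take each user's last version within each day, then count per version per day:

```
| bin _time span=1d
| stats latest(Version) as version by _time User
| timechart span=1d count as users by version
```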
Hello, each event represents a user state and every user has a rank. The data looks as follows:

time    rank    user
time1   30      2
time1   50      1
time2   25      2
time2   51      1

Any idea how to group events by time, and subtract the earliest rank from the latest rank for each user? M
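One sketch, assuming rank is numeric and _time carries the event time; earliest() and latest() resolve by event time, so this subtracts each user's earliest rank from their latest:

```
| stats earliest(rank) as first_rank latest(rank) as last_rank by user
| eval rank_change = last_rank - first_rank
```

With the sample above this yields rank_change = 21 for user 1 (51 - 30... using that user's own values: 51 - 50 = 1) and 25 - 30 = -5 for user 2, one row per user.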
Hi, I have set 35 days of data retention for an index but data is available for 288 days. The daily average licence usage by the index is approx 60 GB. Below is the current setting:

frozenTimePeriodInSecs = 3024000
maxDataSize = auto_high_volume
maxTotalDataSizeMB = 1500000

How can I modify indexes.conf to maintain the 35-day data retention policy? Thanks.
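3,024,000 seconds is exactly 35 days, so the age value itself is right. A bucket only freezes once its newest event is older than frozenTimePeriodInSecs, so a bucket spanning a long time range keeps its old events until the whole bucket ages out; retention is triggered by whichever of the age or size limit is hit first. A sketch (stanza name is a placeholder, values illustrative):

```
# indexes.conf
[your_index]
# 35 days * 86,400 s/day = 3,024,000 s
frozenTimePeriodInSecs = 3024000
maxDataSize = auto_high_volume
# size cap acts as a safety net; keep it large enough that age,
# not size, drives retention at ~60 GB/day of ingest
maxTotalDataSizeMB = 1500000
# optional: cap each bucket's time span so old events can roll promptly
maxHotSpanSecs = 86400
```

Also confirm the settings sit in the correct index stanza (not [default] on one peer only) and that the indexers were restarted or the bundle pushed after the change.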
Hello guys, I have a question regarding data models. I have data that's parsed in Splunk using CIM-compatible add-ons. I need to make a dashboard for all changes in the system, so I used the Change data model and made dashboard panels using this SPL:

| from datamodel "Change.Auditing_Changes"
| from datamodel "Change.Endpoint_Changes"
...etc

but it's very, very slow. So I accelerated the Change data model with a summary range of 1 year, but nothing changed, it's still slow. Could anyone help me with this?
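| from datamodel does not read the acceleration summaries; | tstats is the command that does. A hedged sketch of one panel (the All_Changes field prefix follows the CIM Change model's root dataset; verify the exact dataset and field names with | datamodel Change):

```
| tstats summariesonly=true count
  from datamodel=Change.Auditing_Changes
  by _time span=1d, All_Changes.action, All_Changes.user
```

summariesonly=true restricts the search to accelerated data, which is where the speedup comes from (at the cost of ignoring events not yet summarized).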
Hi, I have a transaction that goes through multiple statuses before it is completed. The challenge I am facing is that one status can appear multiple times before the transaction is completed, and in some cases the same status keeps repeating until it moves to the next state. For example, my logs look something like this for one transaction ID (note: there can be many more statuses than the ones below):

2020-08-27 08:00:40.000, ID="20", STATUS="CREATE"
2020-08-27 08:01:11.000, ID="20", STATUS="POST"
2020-08-27 08:01:42.000, ID="20", STATUS="POST"
2020-08-27 08:02:24.000, ID="20", STATUS="POST"
2020-08-27 08:03:46.000, ID="20", STATUS="REPAIR"
2020-08-27 08:03:56.000, ID="20", STATUS="PENDING"
2020-08-27 08:04:00.000, ID="20", STATUS="UPDATE"
2020-08-27 08:04:12.000, ID="20", STATUS="UPDATE"
2020-08-27 08:04:30.000, ID="20", STATUS="POST"
2020-08-27 08:04:46.000, ID="20", STATUS="COMPLETE"
2020-08-27 08:04:56.000, ID="20", STATUS="COMPLETE"

What I want to do is calculate the total duration of time a transaction spent in a particular status. The final results should look something like this:

ID  STATUS    max(_time)               duration (sec)
20  CREATE    2020-08-27 08:00:40.487  31
20  POST      2020-08-27 08:02:24.265  155
20  REPAIR    2020-08-27 08:03:46.529  10
20  PENDING   2020-08-27 08:03:56.097  4
20  UPDATE    2020-08-27 08:04:12.715  30
20  POST      2020-08-27 08:04:30.366  16
20  COMPLETE  2020-08-27 08:04:56.517

As of now, with the query below, I am able to map the statuses in time order, but the duration is not being calculated accurately. Can someone please help me figure this out?

my search ...
| sort 0 _time
| streamstats current=false last(STATUS) as newstatus by ID
| reverse
| streamstats current=false last(_time) as next_time by ID
| eval duration=next_time-_time
| reverse
| streamstats count(eval(STATUS!=newstatus)) as order BY ID
| stats max(_time) as _time, sum(duration) as "duration(sec)" BY ID order STATUS

Thanks in advance.
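One pattern that reproduces the expected table is to number each consecutive run of a status, then charge each run from its first event until the next run starts (a sketch built only on the fields shown above; the last run has no successor, so it falls back to its own span):

```
my search ...
| sort 0 _time
| streamstats current=f last(STATUS) as prev_status by ID
| eval new_run = if(STATUS != prev_status OR isnull(prev_status), 1, 0)
| streamstats sum(new_run) as run_id by ID
| stats min(_time) as start max(_time) as _time by ID run_id STATUS
| sort 0 ID start
| reverse
| streamstats current=f last(start) as next_start by ID
| reverse
| eval duration = coalesce(next_start, _time) - start
| table ID STATUS _time duration
```

On the sample events this gives CREATE 31s, the first POST run 155s, REPAIR 10s, PENDING 4s, UPDATE 30s, the second POST run 16s, matching the expected output; grouping by run_id is what keeps the two POST runs separate.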
I've got tons and tons of logs. What I want is login durations from the wineventlogs by username. Each event has the EventID and the username that caused it. Let's say the username is "jbob". EventID=4624 is a login; EventID=4634 (disconnect/timeout) or EventID=4647 (actual logoff) ends the session. How can I get the time from a login event to one of the two logoff events, for each login throughout the search window? A user could log in and out 50 times in a day, for example.
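The transaction command handles exactly this pairing: it stitches a start event to an end event per user and emits a duration field for each pair. A sketch (the index name and the user field name are assumptions about your data):

```
index=wineventlog (EventID=4624 OR EventID=4634 OR EventID=4647)
| transaction user startswith=eval(EventID=4624) endswith=eval(EventID=4634 OR EventID=4647)
| table user duration _time
```

Each login/logoff pair becomes one row, so 50 sessions in a day yield 50 rows; at very high volume, a streamstats/stats approach scales better than transaction, which is memory-bound.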
I have installed and configured TA-connectivity for port checks. It works fine when access to the port is open and detects whether anything is listening on the port or not. The problem arises when access to the port is blocked by a firewall. In this case nothing happens and the execution of the script stops. For example, if I have three entries in the configuration file that define three hosts and their ports, and the second entry's port is blocked, then TA-connectivity will correctly determine port open/closed for the first entry but produce nothing for the second and even the third entry. Can a timeout be introduced in this check and reported as such? Or at least have the execution fail for the second entry in the example above but still work for the third. Does anyone know if this app is supported by the developer, or if there are any alternatives?
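A firewall that silently drops packets makes a plain connect block indefinitely, which is the hang described above. Independent of the TA, the desired behavior is a per-target timeout so one blocked host can't stall the rest; a generic Python sketch of that idea (not the TA's actual code):

```python
import socket

def check_port(host: str, port: int, timeout: float = 3.0) -> str:
    """Return "open", "closed", or "timeout" instead of hanging forever."""
    try:
        # create_connection applies the timeout to the connect itself
        with socket.create_connection((host, port), timeout=timeout):
            return "open"
    except socket.timeout:
        # typical symptom of a firewall silently dropping packets
        return "timeout"
    except OSError:
        # e.g. connection refused: host reachable, nothing listening
        return "closed"

# Each target is checked independently, so a blocked second entry
# still lets the third entry report a result.
for host, port in [("127.0.0.1", 22), ("127.0.0.1", 1)]:
    print(host, port, check_port(host, port))
```

Because each check catches its own exception and returns a status string, a "timeout" result is reportable data rather than a silent stop.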
Hi, my CSV (test_csv_lookup) looks like this:

index,value
1,1.1.1.1

Here is my automatic lookup:

LOOKUP-field_extract = test_csv_lookup index AS ip OUTPUTNEW value AS lookedup_val

I have the two following events in the index to which the above automatic lookup applies:

event1 - timestamp, 1
event2 - timestamp, 2.2.2.2

In these events, the "ip" field values are "1" and "2.2.2.2". In the first event, "1" just refers to the index value in the lookup table; the second event contains the raw value and doesn't need a lookup. When I query the index, lookedup_val shows "1.1.1.1". What I need is both values in the field lookedup_val, i.e. "1.1.1.1" and "2.2.2.2". For the first event it works fine: the lookup finds and retrieves the value. For the second event it also does the lookup and obviously can't find a match. When the lookup can't find the value, could it use the raw value or a default field value, in this case 2.2.2.2? Is there a way I can specify this in the automatic lookup output? Thanks.
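transforms.conf can supply a static fallback when a lookup misses (min_matches plus default_match), but not the event's own raw value; falling back to the ip field itself takes a coalesce in the search. Both sketched, with the stanza name matching the lookup above:

```
# transforms.conf -- static default only
[test_csv_lookup]
filename      = test_csv_lookup.csv
min_matches   = 1
default_match = unknown
```

```
# in the search: fall back to the raw ip when the lookup misses
... | eval lookedup_val = coalesce(lookedup_val, ip)
```

The eval can't be pushed into props.conf as a calculated field, because calculated fields are evaluated before automatic lookups in Splunk's search-time operation order.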
I am having a blast with the new dashboards app... a big improvement in how refined one can make a dashboard. Just really cool stuff! I would like to create a dashboard that has a left vertical frame (which I can already do), but I would like to put inputs like the time picker in that frame. I have to imagine this is not that hard?! Any help or pointers to docs would be much appreciated! Thanks.
Splunk hardware recommendations are as follows:

Normal instance - SH and IDXs - 12 core/16 GB
Enterprise Security - SH and IDX - 16 core/32 GB

But when we look, resource utilization is quite low: CPU and memory utilization is only around 30-50%. Can we make Splunk really use the resources? How do you suggest using the parameters below in limits.conf?

max_mem_usage_mb - limits the amount of RAM, in megabytes (MB), a batch of events or results will use in the memory of a search process.
base_max_searches - a constant added to the maximum number of searches, computed as a multiplier of the CPUs.
max_searches_per_cpu - the maximum number of concurrent historical searches for each CPU.

Any recommendations or suggestions around this?
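The historical-search concurrency ceiling is max_searches_per_cpu * number_of_CPUs + base_max_searches, so raising either term raises it. A sketch of the relevant stanza (the values below are illustrative, not recommendations):

```
# limits.conf
[search]
# default 6; added on top of the per-CPU allowance
base_max_searches = 6
# default 1; e.g. 12 cores * 2 + 6 = 30 concurrent historical searches
max_searches_per_cpu = 2
# per-search-process memory budget for a batch of results, in MB
max_mem_usage_mb = 500
```

Low average utilization usually means the search workload, not the limits, is the bottleneck; these knobs only help if searches are actually queueing (check the skipped/deferred search counts in the Monitoring Console first).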
sourcetype=access_combined
| eval action = if(isnull(action) OR action="", "Unknown", action)
| timechart span=40h values(action), count(action)
I'd like to replay a log, simulating prod, and continuously generate events (every 30 seconds is fine). I'm all good with sample mode, but it looks like I can only have random timestamps between earliest/latest. With the config below, my 28k events are generated again and again every 30s with new timestamps from -1w till now. The problem is that I'd like to keep the sequence of events. Can I have sample mode not scramble the timestamps?

[myfile.sample]
mode = sample
outputMode = file
fileName = /opt/log/mynew.log
interval = 30
earliest = -1w
latest = now
token.0.token = \d{2}/\w{3}/\d{4}:\d{2}:\d{2}:\d{2}
token.0.replacementType = timestamp
token.0.replacement = %d/%b/%Y:%T

Instead, I am trying replay mode, but I can't get the output file generated. Nothing. What is wrong with my replay mode?

[myfile.sample]
mode = replay
outputMode = file
fileName = /opt/log/mynew.log
count = 0
interval = 30
earliest = now
latest = now
token.0.token = \d{2}/\w{3}/\d{4}:\d{2}:\d{2}:\d{2}
token.0.replacementType = replaytimestamp
token.0.replacement = %d/%b/%Y:%T

Thanks for your help.
Hello, I have an IIS server with one site and several applications, running the AppDynamics .NET agent 20.4.1. Each application has an appName.svc web page that I can call to check if the service is up. I tried the AppDynamics Extension for URL Monitoring and followed the installation instructions. I can see the URL monitor section in the Metric Browser, and under that I see 'Metric Uploaded'. Where do I see an indication that a URL is down/up? Can I monitor multiple URLs, as I did in the yml file? My config.yml sites section looks like this:

sites:
  # No authentication, with a pattern to match
  - name: ReportService.svc
    url: https://serverName/Reports/ReportService.svc
    followRedirects: false
    groupName: MySites
  - name: DigitalService.svc
    url: http://serverName/Digital/DigitalService.svc
    followRedirects: false
    groupName: MySites
  - name: EmailService.svc
    url: http://serverName/Email/EmailService.svc
    followRedirects: false
    groupName: MySites

Log:

[Monitor-Task-Thread1] 29 Aug 2020 11:04:05,831 ERROR URLMonitorTask-URL Monitor - Unexpected error while running the URL Monitor
com.singularity.ee.agent.systemagent.api.exception.TaskExecutionException: java.lang.NullPointerException
    at com.appdynamics.extensions.urlmonitor.config.RequestConfig.setClientForSite(RequestConfig.java:71) ~[?:?]
    at com.appdynamics.extensions.urlmonitor.URLMonitorTask.run(URLMonitorTask.java:79) [?:?]
    at com.appdynamics.extensions.TasksExecutionServiceProvider$1.run(TasksExecutionServiceProvider.java:48) [?:?]
    at com.appdynamics.extensions.executorservice.MonitorThreadPoolExecutor$TaskRunnable.run(MonitorThreadPoolExecutor.java:113) [?:?]
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_241]
    at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_241]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_241]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_241]
    at java.lang.Thread.run(Thread.java:748) [?:1.8.0_241]
Caused by: java.lang.NullPointerException
    at com.appdynamics.extensions.urlmonitor.config.RequestConfig.setClientForSite(RequestConfig.java:55) ~[?:?]
    ... 8 more
[Monitor-Task-Thread1] 29 Aug 2020 11:04:05,831 INFO URLMonitorTask-URL Monitor - All tasks for URL Monitor finished
[Monitor-Task-Thread1] 29 Aug 2020 11:04:05,831 INFO MetricWriteHelper-URL Monitor - Finished executing URL Monitor at 2020-08-29 11:04:05 IDT
[Monitor-Task-Thread1] 29 Aug 2020 11:04:05,831 INFO MetricWriteHelper-URL Monitor - Total time taken to execute URL Monitor : 0 ms
[Monitor-Task-Thread1] 29 Aug 2020 11:04:05,831 INFO ABaseMonitor - Finished processing all tasks in the job for URL Monitor
[pool-10-thread-2] 29 Aug 2020 11:04:09,628 INFO MetricLimitCheck-URL Monitor - Starting MetricLimitCheck
[pool-10-thread-2] 29 Aug 2020 11:04:09,628 INFO PathResolver-URL Monitor - Install dir resolved to C:\Program Files\AppDynamics\machineagent
[pool-10-thread-1] 29 Aug 2020 11:04:09,628 INFO MachineAgentAvailabilityCheck-URL Monitor - Starting MachineAgentAvailabilityCheck
[pool-10-thread-1] 29 Aug 2020 11:04:09,628 INFO MachineAgentAvailabilityCheck-URL Monitor - SIM is enabled, not checking MachineAgent availability metric
[pool-10-thread-2] 29 Aug 2020 11:04:11,175 INFO MetricLimitCheck-URL Monitor - MetricLimitCheck took 1547 ms to complete
[pool-10-thread-2] 29 Aug 2020 11:04:29,629 INFO MetricLimitCheck-URL Monitor - Starting MetricLimitCheck
[pool-10-thread-2] 29 Aug 2020 11:04:29,629 INFO PathResolver-URL Monitor - Install dir resolved to C:\Program Files\AppDynamics\machineagent
[pool-10-thread-2] 29 Aug 2020 11:04:31,332 INFO MetricLimitCheck-URL Monitor - MetricLimitCheck took 1703 ms to complete
[pool-10-thread-1] 29 Aug 2020 11:04:49,629 INFO MetricLimitCheck-URL Monitor - Starting MetricLimitCheck
[pool-10-thread-1] 29 Aug 2020 11:04:49,629 INFO PathResolver-URL Monitor - Install dir resolved to C:\Program Files\AppDynamics\machineagent
[pool-10-thread-1] 29 Aug 2020 11:04:51,254 INFO MetricLimitCheck-URL Monitor - MetricLimitCheck took 1625 ms to complete