All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hello, I was curious about Splunk access for students. As a Business Analytics student, Splunk would be very useful to me. My college does not currently use Splunk, but I wanted to know whether students can access Splunk learning courses at a student rate. I could also get my professor to recommend it if needed.
Hello, good day to you. We are experiencing an issue where our Splunk instance, when accessed from outside its host (a Windows server), logs us in but immediately kicks us out regardless of the user account, and we receive this error message on the login screen: "Your session has expired. Log in to return to the system". Additional info: we are using a proxy server to pass traffic destined for Splunk server IP ADDRESS2 to Splunk server IP ADDRESS1. Please advise. Thank you.
I have Splunk Cloud and on here I have the Splunk App for Windows Infrastructure installed. I also have the Splunk Supporting Add-on for Active Directory installed (which I was told was needed) on Splunk Cloud. However, I'm not so sure this is correct because the configuration of this supporting Add-on looks very much like it needs to be within my local network. In my local network, I have a domain controller with the Splunk Add-on for Microsoft Windows installed and this is sending data to my Splunk Cloud indexes. However, some of my dashboards display errors like this:  [subsearch]: External search command 'ldapsearch' returned error code 1. Script output = "error_message=Cannot find the configuration stanza for domain=MYDOMAIN in ldap.conf. " I've been reading through the docs again and it seems like I need to have LDAP searches configured and working which appear to be part of the Supporting Add-on for Active Directory. However, another post I read said that the Splunk Add-on for Microsoft Windows removes the need for this supporting add-on.  I'm wholly confused at the moment. Can someone clear this up for me? I just want to get all data working correctly on the Splunk App for Windows Infrastructure hosted in my Splunk Cloud environment. Documentation just feels like an utter minefield.  Am I missing an app on my local server or have I missed a piece of key config on the Splunk Add-on for Microsoft Windows App?
Hi all, one client requires that users with a role similar to power be able to upload files using the GUI. The role has the following capabilities: accelerate_search change_own_password delete_messages edit_monitor edit_search_schedule_window edit_sourcetypes edit_statsd_transforms edit_tcp embed_report export_results_is_visible get_metadata get_typeahead indexes_edit input_file list_accelerate_search list_inputs list_metrics_catalog list_tokens_own metric_alerts output_file pattern_detect request_remote_tok rest_apps_view rest_properties_get rest_properties_set rtsearch run_collect run_mcollect schedule_rtsearch schedule_search search search_process_config_refresh upload_lookup_files Users are only able to reach this point:   I tried to find in the docs which capability is missing here, but I cannot find it. Can someone lend a hand with this? Many thanks in advance. Best regards.
In some of the events, I have '\n' sequences, for example: message: org.springframework.jdbc.UncategorizedSQLException: CallableStatementCallback; uncategorized SQLException for SQL <{call XYZ_API.PROCESS_EVENT(?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)}>; SQL state <16000>; error code <20011>; ORA-00060: deadlock detected while waiting for resource\nORA-06512: at "ABC_OWNER.XYZ_API", line 5133\nORA-06512: at "ABC_OWNER.XYZ_API", line 2001\nORA-06512: at "ABC_OWNER.XYZ_API", line 6829\nORA-06512: at line 1\n; nested exception is java.sql.SQLException: ORA-00060: deadlock detected while waiting for resource\nORA-06512: at "ABC_OWNER.XYZ_API", line 5133\nORA-06512: at "ABC_OWNER.XYZ_API", line 2001\nORA-06512: at "ABC_OWNER.XYZ_API", line 6829\nORA-06512: at line 1\n' Although my regex (message:\s(?<METADATA_ERROR>[^\\\n]+)) to extract everything up to the first '\n' works on regex101 (https://regex101.com/r/XwEg29/1), when I try it in Splunk it extracts only 'org.spri'. Do we need to handle \n differently in Splunk?
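A hedged sketch of a possible fix: in SPL, a quoted rex pattern is parsed once by the search language and once again by PCRE, so matching a literal backslash character in the raw event typically requires four backslashes in the search string. Assuming the event contains the two-character sequence backslash-n (not a real newline), something like:

```
| rex field=_raw "message:\s(?<METADATA_ERROR>[^\\\\]+)"
```

If the indexed event actually contains real newlines rather than literal backslash-n sequences, the character class would instead need `\n` with single escaping, which is what regex101 was effectively testing; checking which form the event holds determines the right pattern.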
Hi, how can I plot cluster agent metrics in a dashboard? There is no option to plot these metrics in any of the widgets. In any widget, if I select the server under which the cluster agent is shown, it provides the metrics for server visibility instead of the cluster agent. Regards, Gopikrishnan
Hi, does anyone know why I am getting this error when I install the Splunk App for Infrastructure? Splunk Support is not coming up with any good solutions for this error, so I need the community's help. My license is up to date. Thank you,
Currently, I have created a dashboard with panels that have input selection. One specific panel displays application data listed by app, from highest to lowest based on the number of bytes. All of that is pulling in fine. The issue is with what I am trying to accomplish next.

I am trying to add a dropdown input that lets you choose which apps to display based on rank. For example: say I have apps listed by most bytes sent, from highest to lowest, let's say 1-20, 1 being highest and 20 being lowest. I want an input selection that allows me to display only results 1-5, 6-10, 11-15, 16-20, etc. It will only be in groupings of 5 like that. I have posted my current query below; please let me know if any more details are needed. The output is a column chart with the x-axis being "app" and the y-axis being "Total_Bytes".

index=your_index NOT app=not-applicable | stats sum(bytes) AS "Total_Bytes" by app | sort 5 -Total_Bytes | eval Total_Bytes=round(Total_Bytes/$AppBytes$,2)

Basically, if I select the option from the dropdown to see the 6th-10th apps, the query would run, ordering the apps from highest to lowest based on bytes, while displaying only the 6th-10th highest apps from those results. Just an FYI on the query above: the "sort 5 -Total_Bytes" was only to eliminate a lot of noise from the results while testing; it's not essential that it stay part of the query (unless sort is used in achieving the above). The eval piece relates to a dropdown that converts the bytes to MB, GB, etc. I hope my description of what I am trying to achieve was clear enough. It seems like something that should be rather simple, but I haven't been able to wrap my head around it yet. Thanks in advance for any input!
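One possible sketch, assuming the dropdown populates two hypothetical tokens $rank_min$ and $rank_max$ (e.g. the choice "6-10" sets them to 6 and 10): rank the apps with streamstats after sorting, then filter on the rank range:

```
index=your_index NOT app=not-applicable
| stats sum(bytes) AS Total_Bytes by app
| sort 0 - Total_Bytes
| streamstats count AS rank
| where rank >= $rank_min$ AND rank <= $rank_max$
| eval Total_Bytes=round(Total_Bytes/$AppBytes$,2)
| fields app Total_Bytes
```

A single dropdown token can also carry both bounds if each choice value encodes them (e.g. value "6-10" split with an eval), but two tokens keep the where clause simple.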
Hello, here is the result of my search. The result is divided by time (seconds) and I want to change this format from seconds to minutes. How should I do that?
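As a sketch, assuming the field is called duration and holds seconds (both are guesses, since the screenshot is not visible here), a simple eval converts it:

```
| eval duration_min=round(duration/60, 2)
```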
Hi, in our dashboards we have lookups which are slow, so we need an alternative approach like a summary index or the KV store. The lookup volume is very high. We tried to go with a summary index, which uses subsearches, but there is a limit where subsearch results over 50K will be skipped, so we were not able to go with a summary index. Are there any other possible approaches?
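One alternative sketch: populate a KV-store-backed lookup from a scheduled search with outputlookup, which is not subject to subsearch result limits, then reference it with the lookup command at dashboard time. The search, index, field names, and the lookup definition my_kvstore_lookup below are all hypothetical placeholders:

```
index=my_index earliest=-1h
| stats latest(value) AS value by key
| outputlookup my_kvstore_lookup
```

This assumes my_kvstore_lookup is defined in transforms.conf as a lookup backed by a KV store collection; because outputlookup writes the full result set directly, the 50K subsearch cap does not apply.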
Hi All, I have a table similar to the following:

id  time
1   2021-03-03 13:52:53.158
1   2021-03-03 13:52:53.001
1   2021-03-03 13:52:52.987
2   2021-03-03 11:59:50.950
2   2021-03-03 11:59:50.655
2   2021-03-03 11:59:50.631

The result I would like to achieve is:

id  time1                    time2                    time3
1   2021-03-03 13:52:53.158  2021-03-03 13:52:53.001  2021-03-03 13:52:52.987
2   2021-03-03 11:59:50.950  2021-03-03 11:59:50.655  2021-03-03 11:59:50.631

Is this possible? Thank you for any help. Simone
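A hedged sketch of one way to pivot this: number the rows per id with streamstats, build a column name from the number, and pivot with xyseries:

```
| streamstats count AS seq by id
| eval col="time".seq
| xyseries id col time
```

This assumes each id has a small, bounded number of times; the column names (time1, time2, ...) follow the order in which the rows arrive, so sorting before the streamstats controls which time lands in which column.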
Hello Splunkers, I've got a problem with data splitting. I would like to split data into separate lines. Please take a look at my data:      10.62.19.11 - - [03/Mar/2021:12:49:02 +0100] "POST /api/setModernServices HTTP/1.1" 200 1315 0.154 0.148 "http://10.69.10.170/radio/web/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/88.0.4324.190 Safari/537.36" "{header:{id:0},data:{entries:[{scheduleInfo:{type:once,startTimestamp:1614772142364,endTimestamp:8999999999000},serviceDefinition:{descriptor:{label:,desc:},dests:[{destId:virtual.46.0.33,destName:}],groupOrder:1,profileId:7637a321-8628-47d3-92e8-5fd85b54aa6b,skipReuseJoinId:true,routing:p2mp,srcId:virtual.96.0.58,tags:[LastOnAirId:null,EmerOnAirId:null],type:connection,force:true}},{scheduleInfo:{type:once,startTimestamp:1614772142364,endTimestamp:8999999999000},serviceDefinition:{descriptor:{label:,desc:},dests:[{destId:virtual.46.0.3,destName:}],groupOrder:1,profileId:7637a321-8628-47d3-92e8-5fd85b54aa6b,skipReuseJoinId:true,routing:p2mp,srcId:virtual.96.0.39,tags:[LastOnAirId:null,EmerOnAirId:null],type:connection,force:true}},{scheduleInfo:{type:once,startTimestamp:1614772142364,endTimestamp:8999999999000},serviceDefinition:{descriptor:{label:,desc:},dests:[{destId:virtual.46.0.7,destName:}],groupOrder:1,profileId:7637a321-8628-47d3-92e8-5fd85b54aa6b,skipReuseJoinId:true,routing:p2mp,srcId:virtual.96.0.39,tags:[LastOnAirId:null,EmerOnAirId:null],type:connection,force:true}},{scheduleInfo:{type:once,startTimestamp:1614772142364,endTimestamp:8999999999000},serviceDefinition:{descriptor:{label:,desc:},dests:[{destId:virtual.46.0.18,destName:}],groupOrder:1,profileId:7637a321-8628-47d3-92e8-5fd85b54aa6b,skipReuseJoinId:true,routing:p2mp,srcId:virtual.96.0.39,tags:[LastOnAirId:null,EmerOnAirId:null],type:connection,force:true}},{scheduleInfo:{type:once,startTimestamp:1614772142364,endTimestamp:8999999999000},serviceDefinition:{descriptor:{label:,desc:},dests:[{destId:virtual.
46.0.8,destName:}],groupOrder:1,profileId:7637a321-8628-47d3-92e8-5fd85b54aa6b,skipReuseJoinId:true,routing:p2mp,srcId:virtual.96.0.39,tags:[LastOnAirId:null,EmerOnAirId:null],type:connection,force:true}},{scheduleInfo:{type:once,startTimestamp:1614772142364,endTimestamp:8999999999000},serviceDefinition:{descriptor:{label:,desc:},dests:[{destId:virtual.46.0.21,destName:}],groupOrder:1,profileId:7637a321-8628-47d3-92e8-5fd85b54aa6b,skipReuseJoinId:true,routing:p2mp,srcId:virtual.96.0.58,tags:[LastOnAirId:null,EmerOnAirId:null],type:connection,force:true}},{scheduleInfo:{type:once,startTimestamp:1614772142364,endTimestamp:8999999999000},serviceDefinition:{descriptor:{label:,desc:},dests:[{destId:virtual.46.0.30,destName:}],groupOrder:1,profileId:7637a321-8628-47d3-92e8-5fd85b54aa6b,skipReuseJoinId:true,routing:p2mp,srcId:virtual.96.0.57,tags:[LastOnAirId:null,EmerOnAirId:null],type:connection,force:true}},{scheduleInfo:{type:once,startTimestamp:1614772142364,endTimestamp:8999999999000},serviceDefinition:{descriptor:{label:,desc:},dests:[{destId:virtual.46.0.20,destName:}],groupOrder:1,profileId:7637a321-8628-47d3-92e8-5fd85b54aa6b,skipReuseJoinId:true,routing:p2mp,srcId:virtual.96.0.39,tags:[LastOnAirId:null,EmerOnAirId:null],type:connection,force:true}},{scheduleInfo:{type:once,startTimestamp:1614772142364,endTimestamp:8999999999000},serviceDefinition:{descriptor:{label:,desc:},dests:[{destId:virtual.46.0.29,destName:}],groupOrder:1,profileId:7637a321-8628-47d3-92e8-5fd85b54aa6b,skipReuseJoinId:true,routing:p2mp,srcId:virtual.96.0.58,tags:[LastOnAirId:null,EmerOnAirId:null],type:connection,force:true}},{scheduleInfo:{type:once,startTimestamp:1614772142364,endTimestamp:8999999999000},serviceDefinition:{descriptor:{label:,desc:},dests:[{destId:virtual.46.0.41,destName:}],groupOrder:1,profileId:7637a321-8628-47d3-92e8-5fd85b54aa6b,skipReuseJoinId:true,routing:p2mp,srcId:virtual.96.0.58,tags:[LastOnAirId:null,EmerOnAirId:null],type:connection,force:true}},{scheduleInfo:{
type:once,startTimestamp:1614772142364,endTimestamp:8999999999000},serviceDefinition:{descriptor:{label:,desc:},dests:[{destId:virtual.46.0.23,destName:}],groupOrder:1,profileId:7637a321-8628-47d3-92e8-5fd85b54aa6b,skipReuseJoinId:true,routing:p2mp,srcId:virtual.96.0.58,tags:[LastOnAirId:null,EmerOnAirId:null],type:connection,force:true}},{scheduleInfo:{type:once,startTimestamp:1614772142364,endTimestamp:8999999999000},serviceDefinition:{descriptor:{label:,desc:},dests:[{destId:virtual.46.0.42,destName:}],groupOrder:1,profileId:7637a321-8628-47d3-92e8-5fd85b54aa6b,skipReuseJoinId:true,routing:p2mp,srcId:virtual.96.0.58,tags:[LastOnAirId:null,EmerOnAirId:null],type:connection,force:true}},{scheduleInfo:{type:once,startTimestamp:1614772142364,endTimestamp:8999999999000},serviceDefinition:{descriptor:{label:,desc:},dests:[{destId:virtual.46.0.43,destName:}],groupOrder:1,profileId:7637a321-8628-47d3-92e8-5fd85b54aa6b,skipReuseJoinId:true,routing:p2mp,srcId:virtual.96.0.58,tags:[LastOnAirId:null,EmerOnAirId:null],type:connection,force:true}},{scheduleInfo:{type:once,startTimestamp:1614772142364,endTimestamp:8999999999000},serviceDefinition:{descriptor:{label:,desc:},dests:[{destId:virtual.46.0.24,destName:}],groupOrder:1,profileId:7637a321-8628-47d3-92e8-5fd85b54aa6b,skipReuseJoinId:true,routing:p2mp,srcId:virtual.96.0.58,tags:[LastOnAirId:null,EmerOnAirId:null],type:connection,force:true}},{scheduleInfo:{type:once,startTimestamp:1614772142364,endTimestamp:8999999999000},serviceDefinition:{descriptor:{label:,desc:},dests:[{destId:virtual.46.0.44,destName:}],groupOrder:1,profileId:7637a321-8628-47d3-92e8-5fd85b54aa6b,skipReuseJoinId:true,routing:p2mp,srcId:virtual.96.0.58,tags:[LastOnAirId:null,EmerOnAirId:null],type:connection,force:true}},{scheduleInfo:{type:once,startTimestamp:1614772142364,endTimestamp:8999999999000},serviceDefinition:{descriptor:{label:,desc:},dests:[{destId:virtual.46.0.39,destName:}],groupOrder:1,profileId:7637a321-8628-47d3-92e8-5fd85b54aa6b,skipR
euseJoinId:true,routing:p2mp,srcId:virtual.96.0.58,tags:[LastOnAirId:null,EmerOnAirId:null],type:connection,force:true}},{scheduleInfo:{type:once,startTimestamp:1614772142364,endTimestamp:8999999999000},serviceDefinition\x     I want to have data split like below:     10.62.19.11 - - [03/Mar/2021:12:49:02 +0100] type:once,startTimestamp:1614772142364,endTimestamp:8999999999000},serviceDefinition:{descriptor:{label:,desc:},dests:[{destId:virtual.46.0.33,destName:}],groupOrder:1,profileId:7637a321-8628-47d3-92e8-5fd85b54aa6b,skipReuseJoinId:true,routing:p2mp,srcId:virtual.96.0.58,tags:[LastOnAirId:null,EmerOnAirId:null],type:connection,force:true 10.62.19.11 - - [03/Mar/2021:12:49:02 +0100] type:once,startTimestamp:1614772142364,endTimestamp:8999999999000},serviceDefinition:{descriptor:{label:,desc:},dests:[{destId:virtual.46.0.3,destName:}],groupOrder:1,profileId:7637a321-8628-47d3-92e8-5fd85b54aa6b,skipReuseJoinId:true,routing:p2mp,srcId:virtual.96.0.39,tags:[LastOnAirId:null,EmerOnAirId:null],type:connection,force:true       I suppose I need to use mvexpand in combination with static regex,  but I cannot find correct formula. Could you help please. 
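An untested sketch of the mvexpand approach described above: capture the request prefix once, pull every repeated block with rex max_match=0, expand, and recombine. The capture pattern here is a guess and will likely need tuning against the real payload:

```
| rex "^(?<prefix>\S+ - - \[[^\]]+\])"
| rex max_match=0 "(?<entry>type:once,startTimestamp:.*?force:true)"
| mvexpand entry
| eval line=prefix." ".entry
| table line
```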
Running on Splunk 8.08. Even though we already have ITSI set up to recognise dozens of different performance-related data sources, IT Essentials Learn shows "Use Case Families were not found." on its homepage and doesn't offer any macro configuration. Both of the only two scheduled searches the app offers are erroring out. "Telemetry: ITE Procedure Stats" errors with "Error in 'eval' command: Type checking failed. 'OR' only takes boolean arguments." on the below code snippet:

| eval [| rest /servicesNS/nobody/splunk_instrumentation/telemetry | table telemetrySalt | format | rex field=search mode=sed "s/[()]//g"] | inputlookup ite_procedure

Likewise, "Telemetry: ITE Procedure Usages" errors out on:

| eval [| rest /servicesNS/nobody/splunk_instrumentation/telemetry | table telemetrySalt | format | rex field=search mode=sed "s/[()]//g"]
We want to validate the CIM mapping that we performed, but I am unable to find the "Endpoint" & "Data Loss Prevention" datasets in the following location: Settings > Data Model > CIM Validation (S.o.S)
I would like to find a detailed tutorial on how to create a Splunk app to parse syslogs with pre-defined field names, not the automatic key/value pairs that Splunk is able to detect. I have syslogs with different log types; I wonder if there is some documentation or a tutorial on this. Can anyone point me in the right direction? I am new to Splunk. Thanks.
I have this code that shows me the start and end times of runs of a program:

index=index1 source=source1 | transaction startswith="Execution started" endswith="Execution ended" | eval start_time=_time | eval end_time=_time + duration | table start_time end_time

I also have this search, which searches for errors (in a different log file):

search index=index1 source=source2 severity=ERROR

What I need is another column in the runs table showing how many errors occurred during each run of the program. For example, appendcols like this:

<times_search> | appendcols [ search <error_search> | stats count as errors by _time ]

But this has two problems: 1) it's by _time, when I need it to be by run; I need the count of all the errors that occurred between start_time and end_time. 2) It's not sorted the way I need it to be; as the documentation says, appendcols puts the first result in the first row, the second in the second, and so on. How do I achieve this?
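One hedged way to sidestep appendcols entirely: search both sources together, assign a run number that increments at each "Execution started" event, and aggregate per run. This is an untested sketch and assumes runs do not overlap:

```
index=index1 ((source=source1 ("Execution started" OR "Execution ended")) OR (source=source2 severity=ERROR))
| sort 0 + _time
| streamstats count(eval(searchmatch("Execution started"))) AS run_id
| stats min(_time) AS start_time max(_time) AS end_time count(eval(severity=="ERROR")) AS errors by run_id
```

Here end_time is approximated by the last event in the run (the "Execution ended" event), so it may differ slightly from the transaction-based duration calculation.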
I have a TA (modular input) running on Splunk and I'm trying to read a conf file, for example app.conf. My TA extends the OOTB BaseModInput. I can do this in Python with something like:

from splunk.clilib import cli_common as cli
cfg = cli.getConfStanza('app', 'launcher')

And the data is returned correctly. However, installing other apps can affect the values returned. When I install the VSCode app, I always get the app.conf values back from VSCode (even without adding the debug import/start code to my own app). So my question is: is the approach I'm taking the correct way to read conf files? And if not, what is the recommended approach?
We have GET and POST events in the Apache logs. I am trying to extract the average response time for each URL being accessed by users. Below are examples of the log events. How can I calculate the average response time?

10.XXX.X.X [03/Mar/2021:05:42:10 -0600] 84112 "GET /XXXXX/content/actions/amt/tasks/my-tasks?lang=en_US HTTP/1.1" 200 23593 "https://testpage.com/test/*" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/88.0.4324.190 Safari/537.36" - 907D339F136C5166E589B1128488A341.app2

10.XXX.X.X [03/Mar/2021:05:42:10 -0600] 84112 "GET /XXXXX/content/actions/amt/tasks/my-tasks?lang=en_US HTTP/1.1" 200 23593 "https://testpage.com/retest/*" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/88.0.4324.190 Safari/537.36" - 907D339F136C5166E589B1128488A341.app2
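A hedged sketch, assuming the number after the timestamp (84112 in the samples) is the response time — the unit is not stated in the post, so that assumption needs checking against the Apache LogFormat in use:

```
| rex "\]\s+(?<response_time>\d+)\s+\"(?<method>GET|POST)\s+(?<uri_path>[^\s?\"]+)"
| stats avg(response_time) AS avg_response_time count BY uri_path
| sort - avg_response_time
```

The uri_path capture stops at the first "?" so that query-string variants of the same URL are averaged together.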
Hi all, I've already been searching for an answer to my question within the community, but I didn't find anything exhaustive. Basically, I noticed different numbers for license usage when computed using the Rollover Summary (the "official" one) vs the Usage log type. Of course, I know that Rollover Summary logs refer to the previous day. With this in mind, the issue doesn't happen every day, but sometimes the license usage computed using the "Usage" logs is about 10% greater than the Rollover Summary. I'd like to understand why this happens, since I'm experiencing some license violations and I'm trying to investigate which index/sourcetype is mainly responsible. But this is not easy, since the aggregated detailed usage doesn't correspond to the Rollover Summary. To be more clear, if I run this search on the License Master, I get a number for yesterday (say 100 GB):

index=_telemetry source=*license_usage_summary.log | eval _time=_time - 43200, usage_GB = round(b/1024/1024/1024, 3) | timechart span=1d sum(usage_GB) as usage_GB

while if I run this one, split by index, the aggregate gives me a value about 10% higher (110 GB):

index=_internal source=*license_usage.log* type=Usage st!="modular*" | eval gb=b/1024/1024/1024 | timechart span=1d sum(gb) AS idx_volume_GB by idx

The License Master runs on a Windows 2016 Server, with Splunk 7.3.6. Does anyone know why this happens and how to overcome this "issue"? Thanks in advance
I have the filenames below, where I need to capture only the bold part. But all these files are in the same location, and I cannot use a single regex that applies to all the file formats below. How do I proceed with capturing the required format?

worldtime.xml.1
ztymp.txt.1
molu.dat.1
jss_pyuroly_7.dat.1
zpiyzygh.rtnugbhti.1
AD.CD.MBOUDN.1
DM.DEVT.IYP.IN.1
zpiyrmlu.rage.600.1.txt.1
in_zpiyrmlu.rage.600.1.txt.1
in_soledt.pou.til.ssn.gpg.1.txt.1
zprunsledSCALLb1.prn.1