
Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

All Topics

Our dashboard contains heavy CSS, heavy HTML, and heavy Splunk queries, and I want to improve its performance. 1. Is there any way to reduce loading time other than base searches and span? 2. Does using JavaScript for CSS really help to improve performance?
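For reference, the base-search pattern mentioned in the question is declared in Simple XML roughly as in the sketch below, so the heavy query runs once and each panel only post-processes its results (the index, sourcetype, and queries here are hypothetical):

<dashboard>
  <!-- The expensive search runs once for the whole dashboard -->
  <search id="base">
    <query>index=web sourcetype=access_combined | stats count by status, host</query>
    <earliest>-24h</earliest>
    <latest>now</latest>
  </search>
  <row>
    <panel>
      <chart>
        <!-- Post-process search: cheap, runs over the base results -->
        <search base="base">
          <query>stats sum(count) as total by status</query>
        </search>
      </chart>
    </panel>
  </row>
</dashboard>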
How can I add new fields and/or rename existing fields in Global Account Settings, which by default only has username/password inputs? Something like client ID, client secret, etc. I cannot add the password/client secret as a data input parameter, as they get stored in plain text when added via the system user interface (Settings > Data inputs). I cannot make them global parameters either, as we need to support multiple environments, each with a different set of data. Any help would be appreciated. @splunk
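One common approach for secrets like a client secret is the storage/passwords REST endpoint, which stores the value encrypted rather than in plain text; a minimal sketch, where the app name, realm, and credential values are placeholders:

# Store a credential encrypted via REST (myapp, my_realm, and the values are placeholders)
curl -k -u admin:changeme https://localhost:8089/servicesNS/nobody/myapp/storage/passwords \
    -d name=my_client_id -d password=my_client_secret -d realm=my_realm

The add-on's input code can then read the credential back through the same endpoint instead of taking it as a plain-text input parameter.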
Please help me with an SPL query to locate correlation searches that are in trouble and not working right, for example because of a missing macro. Thank you very much in advance.
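As a hedged starting point, the scheduler logs in _internal record each scheduled (including correlation) search run, so non-success runs can be surfaced with something like the sketch below; the exact status and reason values vary by version:

index=_internal sourcetype=scheduler status!=success savedsearch_name=*
| stats count by savedsearch_name, status, reason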
Hello Splunk Community, would you have any advice or recommendations on how to use Trumpet with an organizational CloudTrail? Our organization currently has individual CloudTrails deployed in each account, but with the introduction of Control Tower this design would become redundant. Any advice on how to best deploy Trumpet with an organizational CloudTrail is appreciated.
Hi, I have data as in the sample below:

Date Time val1 val2 val3 ...
21/08/31 01:00:00 2 1 2 2 2 2 2 1 1 2 69 1 0 2 0 0 3 3
21/08/31 02:00:00 1 1 0 1 1 1 0 0 0 0 0 0 0 1 0 1 1 0
21/08/31 03:00:00 2 1 1 2 2 2 0 1 0 2 1 0 0 2 0 1 2 2
21/08/31 04:00:00 1 1 1 1 1 1 67 0 1 150 205 0 169 312 0 0 2 2
21/08/31 05:00:00 1 0 1 1 1 1 0 0 0 70 1 2 0 1 1 1 2 58

I can calculate the max value for a specific date and time and show it as a single-value panel on a dashboard. What I'd like to do is find the max value for the latest time reported in the data for a date.

index=my_index sourcetype=my:sourcetype Date="21/08/31" Time="03:00:00"
| eval max_val = max(val1, val2, val3, val4 ....)
| stats max(max_val) as mymax

So in the sample, where the latest Time is "05:00:00", is there a way I can derive that in the search rather than hard-coding the value? Thanks in advance for any thoughts.
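A minimal sketch of one approach, reusing the field names from the question: find the latest Time for the date with eventstats, keep only those events, and take the max across the val* fields with foreach (the string comparison works here because HH:MM:SS sorts lexicographically):

index=my_index sourcetype=my:sourcetype Date="21/08/31"
| eventstats max(Time) as latest_time
| where Time=latest_time
| foreach val* [ eval max_val=max(coalesce(max_val, 0), '<<FIELD>>') ]
| stats max(max_val) as mymax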
Hi team, I am creating a query to fetch a unique ID from different events that have different statuses. If two log events have the same unique ID, one with status="START" and one with status="END", then that application has completed one successful iteration; otherwise it should be treated as an error. I created one query but can't work out how to compare the correlationId across different events. Can anyone please help with a query that compares the correlationId across events, building on the query below?

index="dev" | rex "\"Status\\\\\"\s:\s\\\\\"(?<Status>[^\\\]+)" | stats count by applicationName, Status | where Status in("START","END")

Below are the logs for the START and END events.

log: [2021-09-01 04:14:10.216] INFO api [[PythonRuntime].uber.12772: [tyt-autoencoding-dev].get-terms-from-oc/processors/1.ps.BLOCKING @f089563] [event: 80961137-6734-4f7f-8750-3d27cdf2a4eb]: { "correlationId" : "80961137-6734-4f7f-8750-3d27cdf2a4eb", "Status" : "START", "priority" : "INFO", "category" : "com.tayota.api", "elapsed" : 0, "timestamp" : "2021-09-01T04:14:10.215Z", "applicationName" : "Toyato Encoding API", "applicationVersion" : "v1", "environment" : "Development" }

log: [2021-09-01 04:14:10.216] INFO api [[PythonRuntime].uber.12772: [tyt-autoencoding-dev].get-terms-from-oc/processors/1.ps.BLOCKING @f089563] [event: 80961137-6734-4f7f-8750-3d27cdf2a4eb]: { "correlationId" : "80961137-6734-4f7f-8750-3d27cdf2a4eb", "Status" : "END", "priority" : "INFO", "category" : "com.tayota.api", "elapsed" : 0, "timestamp" : "2021-09-01T04:14:10.215Z", "applicationName" : "Toyato Encoding API", "applicationVersion" : "v1", "environment" : "Development" }

Thanks in advance.
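A minimal sketch of one way to pair the events, assuming a correlationId extraction with the same escaping as the existing Status rex: group by correlationId and check that both statuses are present.

index="dev"
| rex "\"Status\\\\\"\s:\s\\\\\"(?<Status>[^\\\]+)"
| rex "\"correlationId\\\\\"\s:\s\\\\\"(?<correlationId>[^\\\]+)"
| search Status IN ("START", "END")
| stats values(Status) as statuses by correlationId, applicationName
| eval outcome=if(isnotnull(mvfind(statuses, "START")) AND isnotnull(mvfind(statuses, "END")), "SUCCESS", "ERROR")
| table correlationId, applicationName, statuses, outcome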
Hi, I tried the logic below to replace the "No results found" message with a custom message, but after adding the appendpipe count at the end of the query, the logic is not working as expected. Can anyone please help?

base search | fields version, time | appendpipe [stats count | where count=0]

<done>
  <condition match="'job.resultCount' == 0">
    <set token="show_html">true</set>
  </condition>
  <condition>
    <unset token="show_html"/>
  </condition>
</done>

<chart rejects="$show_html$">
  ...
</chart>
<html depends="$show_token$">
  <div style="font-weight:bold;font-size:150%;text-align:center;color:red">
    No data, Please check later
  </div>
</html>
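A minimal working sketch, under two assumptions: the appendpipe is dropped (its synthetic count=0 row makes job.resultCount nonzero even when the base search returns nothing), and the depends token matches the token the done handler actually sets (show_html, not show_token):

<search>
  <query>base search | fields version, time</query>
  <done>
    <condition match="'job.resultCount' == 0">
      <set token="show_html">true</set>
    </condition>
    <condition>
      <unset token="show_html"/>
    </condition>
  </done>
</search>
<chart rejects="$show_html$">
  ...
</chart>
<html depends="$show_html$">
  <div style="font-weight:bold;font-size:150%;text-align:center;color:red">
    No data, please check later
  </div>
</html>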
Hey everyone! I'm in the process of investigating a Splunk instance that I have inherited. I've got a decent handle on things, but I am seeing that the majority of our index is being eaten up by logs from our multiple Active Directory domain controllers. Digging around, I see that the local inputs.conf file for the universal forwarder on the DCs is empty, and btool confirms they are not pulling in config from other places. There is, however, a deploymentclient.conf file with a single targetUri in it. What's interesting, though, is that the listed targetUri is not a server name that is present in our environment. It's close, but not exact. Further, I see no signs that this particular domain controller has ever checked in with our deployment server. I know for a fact that we manually installed the universal forwarder on the domain controller. I also know that the correct deployment server and indexer were provided at install time. So what might have caused the targetUri to change? I'm thinking it may be something in the deployment server itself, but I don't know where to look for that setting or how the deployment server might have updated it. I'm still getting my head wrapped around just what the deployment server itself is doing, in fact. But I am worried that with a full-throttle, out-of-the-box universal forwarder, we are likely collecting far more information than we actually want.
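For reference, the effective setting and the file it comes from can be confirmed on the forwarder with btool, and a hand-set deploymentclient.conf normally looks like the second part of this sketch (the URI is a placeholder):

splunk btool deploymentclient list --debug

# $SPLUNK_HOME\etc\system\local\deploymentclient.conf (placeholder URI)
[target-broker:deploymentServer]
targetUri = deploy.example.com:8089

Note that a deployment server can itself push a new deploymentclient.conf inside a deployed app, which is one way a targetUri can change after install.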
This post did not let me share the search string because it contains HTML code, etc. Any advice is appreciated. Thank you.
Hello Splunkers! I wanted to ask if anyone out there has some SPL that I can use as an alert to detect failed and successful logins from countries other than the United States (!=United States)? Thank you for your help!
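A minimal sketch, assuming authentication events with an action field and a source IP field named src (the index, sourcetype, and field names are placeholders for your environment):

index=auth sourcetype=authentication (action=success OR action=failure)
| iplocation src
| where isnotnull(Country) AND Country!="United States"
| stats count by user, src, Country, action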
Hi, I want to know the difference between setting props.conf on the search head versus on the cluster master in a distributed environment (1 search head, 1 CM, 2 indexers, for example). I have to set up a field extraction for custom logs, i.e., set the EXTRACT-fields parameter in a props.conf with the corresponding regex, and I was wondering about the question above. If someone can help me understand, that would be nice. Thanks.
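For what it's worth, EXTRACT- is a search-time setting, so a minimal sketch like the one below would normally be deployed to the search head (the sourcetype and regex are hypothetical); the cluster master distributes configuration to the indexer peers, which matters for index-time settings such as line breaking and timestamping:

# props.conf on the search head (search-time field extraction)
[my:custom:logs]
EXTRACT-fields = ^(?<timestamp>\S+)\s+(?<level>\w+)\s+(?<message>.*)$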
Hi all, this is my scenario:

- Splunk Enterprise (8.0.5) installed on-premises on a Linux server
- SAML authentication (works well with the company AD)
- Splunk native authentication always active

Suddenly, since last week, the URL to log in with a Splunk user (https://myserver/en-US/account/login?loginType=splunk) stopped working. If we try the Splunk login URL we receive "Bad Request" in the browser. The Splunk authentication config is unchanged. Has anyone had the same problem? Thanks in advance, Giorgio
I have a lookup file called dataset.csv which has one field, dataset_name:

dataset_name
dataset1
dataset2
dataset3

I need to display the dataset_names that are not present in the search results but are present in the lookup. I use the query below to retrieve all the datasets, but with lookup it does not show the missing names.

QUERY:

index=dataIntegration source=piedpiper lambda_events
| search event.proc_stat_cd="SCSS" AND "event.s3_location"="*s3://lake/depositescrow*" AND "event.module_name"=EMR
| rename event.dataset_id as "Id"
| rename event.regrd_dataset_nm as Name
| rename event.proc_ts as "DateTime"
| table "Id", "Name", "DateTime"

I want results like the following. How can I achieve this?

DatasetName  FilesMoved  Time
dataset1     YES
dataset2     NO
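A minimal sketch of one pattern for surfacing lookup entries that are missing from the results: tag real events with FilesMoved=YES, append the lookup tagged NO, and keep the first value seen per name (the rename of dataset_name to Name is an assumption):

index=dataIntegration source=piedpiper lambda_events
| search event.proc_stat_cd="SCSS" AND "event.s3_location"="*s3://lake/depositescrow*" AND "event.module_name"=EMR
| rename event.regrd_dataset_nm as Name, event.proc_ts as Time
| eval FilesMoved="YES"
| append [| inputlookup dataset.csv | rename dataset_name as Name | eval FilesMoved="NO"]
| stats first(FilesMoved) as FilesMoved, first(Time) as Time by Name

Because the indexed events come before the appended lookup rows, first() keeps YES for any dataset that actually appeared.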
Hey there, I have n systems. I would like to apply a rex query where each field name contains the system ID, which I have to extract from the logs too. Example:

... | rex field=_raw "System(\d+)\sFailure\sRate\s(?<system_\1_failurerate>\d*)"

Unfortunately, backreferencing in a field name does not seem to be supported. Does anyone have an idea? It needs to be added to a TA afterwards, hence eval etc. are not applicable. Thanks
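Since this needs to live in a TA, one documented route is a search-time REPORT extraction, where transforms.conf FORMAT can build the field name from a capture group; a minimal sketch with hypothetical stanza and sourcetype names:

# transforms.conf
[system_failure_rate]
REGEX = System(\d+)\sFailure\sRate\s(\d*)
FORMAT = system_$1_failurerate::$2

# props.conf
[my:sourcetype]
REPORT-system_failure = system_failure_rate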
Hello, I have some issues using the following input configuration file on a Windows machine:

[monitor://T:\Toshtest\logs\test\log\*.log]
index = test_toc
source = toc:tos

I am not getting any errors, but it's not pulling in any events. Thank you so much; any help will be highly appreciated.
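A couple of hedged checks to run on the forwarder itself: confirm the stanza is actually loaded, then see what the file monitor thinks it is watching:

splunk btool inputs list --debug
splunk list monitor

If the stanza loads but nothing arrives, the TailingProcessor messages in the forwarder's splunkd.log usually say why a file was skipped.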
Hello, I am currently sending data to my Splunk server via UDP. I am looking to migrate to Splunk Cloud, so I got the Splunk Cloud free trial. I noticed that in Splunk Cloud there is currently no way to create a UDP listener to accept my data. What is the workaround for this? -Marco
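The usual workaround is to terminate UDP on a forwarder you control and let it forward to Splunk Cloud with the cloud's forwarder credentials app; a minimal inputs.conf sketch for that forwarder (port, sourcetype, and index are placeholders):

# inputs.conf on the forwarder that receives the UDP traffic
[udp://514]
sourcetype = syslog
index = main
connection_host = ip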
We deployed a clustered Splunk Enterprise environment and would like to check successful login attempts from the operating and supporting teams at the Splunk Web interface (front end), to measure front-end utilization. The current solution has 3 search heads and 3 indexers. Where and how should we retrieve this information? Thanks.
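A hedged starting point, assuming the _audit index captures Splunk Web logins on each search head:

index=_audit action="login attempt" info=succeeded
| stats count by user, host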
Hello, I have an indexed list of internal IPs for which I have been able to get a count based on a CIDR list in a CSV lookup table:

sourcetype="rapid7:insightvm:asset" "tags{}.name"="*" index=rapid7
| dedup id
| fields ip
| lookup GracoSubnet2.csv CIDR_range as ip OUTPUT CIDR_range as CIDR Latitude Longitude
| where CIDR != "NONE"
| stats count, values(ip) as Unique_IP by CIDR

This gives me a great table with a count of how many unique internal IPs fall under each CIDR range. I have been unable to figure out how to take that count and reference the Lat and Lon data for those CIDRs in the CSV lookup table. Any ideas?
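A minimal tweak under the question's own field names: since each CIDR maps to one Latitude/Longitude pair in the lookup, they can simply be added to the group-by:

sourcetype="rapid7:insightvm:asset" "tags{}.name"="*" index=rapid7
| dedup id
| fields ip
| lookup GracoSubnet2.csv CIDR_range as ip OUTPUT CIDR_range as CIDR Latitude Longitude
| where CIDR != "NONE"
| stats count, values(ip) as Unique_IP by CIDR, Latitude, Longitude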
Hi, has anyone found a way to change the default language (English) when using SplunkJS in your own web app (outside of Splunk Web)? Using Splunk view components, everything is in English only. I have been searching through the official documentation but could not find any useful information. Thanks!

Here is an example of a TimeRangeView in English; I want to change it to French.

// Create a custom time range picker
// Show only the custom date panel; hide the presets, real-time, relative, and advanced panels
var mypresetsettings = {
    showCustomDateTime: false,
    showPresets: false,
    showCustomRealTime: false,
    showCustomAdvanced: false,
    showCustomRelative: false,
    showCustomDate: true
};

// Instantiate a view using the custom time range picker
var mytimerange_custom = new TimeRangeView({
    id: "example-timerange_custom",
    managerid: "search3",
    earliest_time: "-7d",
    latest_time: "now",
    dialogOptions: mypresetsettings,
    el: $("#mytimerangeview_custom")
}).render();

console.log(mytimerange_custom.val());

// Update the search manager when the time range changes
mytimerange_custom.on("change", function () {
    mysearch3.settings.set(mytimerange_custom.val());
    //mysearch4.settings.set(mytimerange_custom.val());
});
To the best of my knowledge, only admins can add an index or an indexer, correct? I am having a tough time creating an alert to see who adds new indexes or indexers to my Splunk Enterprise or ES deployment. Your help is appreciated in advance.
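A hedged starting point: index creation via Splunk Web or the REST API goes through the data/indexes endpoint, which appears in splunkd's access log (the field names assume the standard splunkd_access extractions):

index=_internal sourcetype=splunkd_access method=POST uri=*/data/indexes*
| table _time, user, uri, status, clientip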