All Topics


Hi, I want to know the difference between setting props.conf on the search head versus on the cluster master in a distributed environment (1 search head, 1 CM, 2 indexers, for example). I have to set up a field extraction from custom logs, i.e. set an EXTRACT-fields parameter in props.conf with the corresponding regex, and I was wondering about the question above. If someone can help me understand, that would be great. Thanks.
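For reference, a minimal sketch of the kind of stanza in question, with a hypothetical sourcetype and regex (both illustrative, not from the original post):

# props.conf on the search head
[custom:logs]
# EXTRACT- is a search-time setting, evaluated when the search runs
EXTRACT-fields = status=(?<status>\w+)

Because EXTRACT- is search-time, placing it on the search head is sufficient; distributing props.conf via the cluster master to the indexers matters chiefly for index-time settings such as LINE_BREAKER or TIME_PREFIX.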
Hi all, this is my scenario:
Splunk Enterprise (8.0.5) installed on premises on a Linux server
SAML authentication (which works well with the company AD)
Splunk native authentication still active
Suddenly, since last week, the URL to log in with a Splunk user (https://myserver/en-US/account/login?loginType=splunk) stopped working. If we try the Splunk login URL we receive "Bad Request" in the browser. The Splunk authentication config is still the same. Has anyone had the same problem? Thanks in advance, Giorgio
I have a lookup file called dataset.csv with one field, dataset_name:
dataset1
dataset2
dataset3
I need to display, in a table, the dataset_names that are not present in the search results but are present in the lookup. I use the query below to retrieve all the datasets, but with the lookup it does not show the missing names.
QUERY:
index=dataIntegration source=piedpiper lambda_events
| search event.proc_stat_cd="SCSS" AND "event.s3_location"="*s3:*//lake/depositescrow*" AND "event.module_name"=EMR
| rename event.dataset_id AS "Id"
| rename event.regrd_dataset_nm AS Name
| rename event.proc_ts AS "DateTime"
| table "Id", "Name", "DateTime"
I want results like the following; how can I achieve this?
DatasetName   FilesMoved   Time
dataset1      YES
dataset2      NO
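One possible shape for this, sketched under the assumption that event.regrd_dataset_nm is the field that should match dataset_name (adjust names to your data):

index=dataIntegration source=piedpiper lambda_events "event.proc_stat_cd"="SCSS"
| rename event.regrd_dataset_nm AS DatasetName, event.proc_ts AS Time
| eval FilesMoved="YES"
| append [| inputlookup dataset.csv | rename dataset_name AS DatasetName | eval FilesMoved="NO"]
| stats max(FilesMoved) AS FilesMoved values(Time) AS Time by DatasetName

The append adds a NO row for every dataset in the lookup; max() then keeps YES for datasets that appeared in the events, since "YES" sorts after "NO".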
Hey there, I have n systems. I would like to apply a rex query where each field name contains the system ID, which I have to extract from the logs too. Example:
... | rex field=_raw "System(\d+)\sFailure\sRate\s(?<system_\1_failurerate>\d*)"
Unfortunately, backreferences in field names do not seem to be supported. Does anyone have an idea? It needs to be added to a TA afterwards, so eval etc. are not applicable. Thanks
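transforms.conf REPORT- extractions do allow a capture group to become part of the field name via FORMAT, which may be a way around the rex limitation; a sketch, assuming a hypothetical sourcetype your:sourcetype:

# transforms.conf (in the TA)
[system-failure-rate]
REGEX = System(\d+)\sFailure\sRate\s(\d+)
FORMAT = system_$1_failurerate::$2

# props.conf
[your:sourcetype]
REPORT-system_rates = system-failure-rate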
Hello, I have some issues using the following input configuration file on a Windows machine:

[monitor://T:\Toshtest\logs\test\log\*.log]
index = test_toc
source = toc:tos

I am not getting any errors, but it's not pulling any events. Thank you so much, any help will be highly appreciated.
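A slightly fuller version of the stanza, with the additions flagged as assumptions (the original index and source are kept as-is):

[monitor://T:\Toshtest\logs\test\log\*.log]
index = test_toc
source = toc:tos
sourcetype = toc_log
disabled = 0

sourcetype = toc_log is hypothetical; without it Splunk auto-assigns one. It is also worth confirming that the test_toc index exists on the indexers and that the account running the forwarder can read T:\Toshtest\logs\test\log\, since either issue can produce exactly this symptom of no errors and no events.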
Hello, I am currently sending data to my Splunk server via UDP. I am looking to migrate to Splunk Cloud, so I got the Splunk Cloud free trial. I noticed that in Splunk Cloud there is currently no way to create a UDP listener to accept my data. What is the workaround for this? -Marco
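The usual pattern is to keep a forwarder on-premises that listens on UDP and forwards to Splunk Cloud; a sketch of the inputs side, assuming syslog on UDP 514 and a hypothetical index name:

# inputs.conf on an on-prem heavy or universal forwarder
[udp://514]
index = syslog
sourcetype = syslog

The outputs side comes from the universal forwarder credentials app that Splunk Cloud provides, which configures outputs.conf to point at your cloud stack.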
We deployed a clustered Splunk Enterprise environment and would like to check successful login attempts from the operating and support teams at the Splunk Web interface (front end), to measure front-end utilization. The current solution has 3 search heads and 3 indexers. Where and how should we retrieve this information? Thanks.
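Successful logins are recorded in the _audit index on each search head; a sketch of one way to summarize them (field names are the standard audit ones):

index=_audit action="login attempt" info=succeeded
| stats count AS successful_logins latest(_time) AS last_login by user
| convert ctime(last_login)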
Hello, I have an indexed list of internal IPs that I have been able to get a count for, based on a CIDR list in a CSV lookup table:

sourcetype="rapid7:insightvm:asset" "tags{}.name"="*" index=rapid7
| dedup id
| fields ip
| lookup GracoSubnet2.csv CIDR_range as ip OUTPUT CIDR_range as CIDR Latitude Longitude
| where CIDR != "NONE"
| stats count, values(ip) as Unique_IP by CIDR

This gives me a great table with a count of how many unique internal IPs fall under each CIDR range. I have been unable to figure out how to take that count and reference the Lat and Lon data for those CIDRs in the CSV lookup table. Any ideas?
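Since stats only keeps the fields it is told about, the Latitude and Longitude that the lookup already returns just need to be carried through the aggregation; a sketch based on the same search:

| lookup GracoSubnet2.csv CIDR_range as ip OUTPUT CIDR_range as CIDR Latitude Longitude
| where CIDR != "NONE"
| stats count values(ip) AS Unique_IP values(Latitude) AS Latitude values(Longitude) AS Longitude by CIDR

values() works here on the assumption that each CIDR maps to a single Lat/Lon pair in the lookup.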
Hi, has anyone found a way to change the default language (English) when using SplunkJS in your own web app (outside of Splunk Web)? When using Splunk view components, is it English only? I have been searching through the official documentation but could not find any useful information. Thanks! Here is an example of a TimeRangeView in English; I want to change it to French.

// Create a custom time range picker
// Hide the Presets, Real-time, and Advanced panels; show only the custom date panel
var mypresetsettings = {
    showCustomDateTime: false,
    showPresets: false,
    showCustomRealTime: false,
    showCustomAdvanced: false,
    showCustomRelative: false,
    showCustomDate: true
};

// Instantiate a view using the custom time range picker
var mytimerange_custom = new TimeRangeView({
    id: "example-timerange_custom",
    managerid: "search3",
    earliest_time: "-7d",
    latest_time: "now",
    dialogOptions: mypresetsettings,
    el: $("#mytimerangeview_custom")
}).render();

console.log(mytimerange_custom.val());

// Update the search manager when the time range changes
mytimerange_custom.on("change", function () {
    mysearch3.settings.set(mytimerange_custom.val());
    //mysearch4.settings.set(mytimerange_custom.val());
});
To the best of my knowledge, only admins can add an index or an indexer, correct? I am having a tough time creating an alert to see who adds new indexes or indexers to my Splunk Enterprise or ES. Your help is appreciated in advance.
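One possible starting point: index creation through Splunk Web or the CLI goes through the splunkd REST API, and those calls are logged; a sketch (field names are the usual splunkd_access ones):

index=_internal sourcetype=splunkd_access method=POST uri="*/data/indexes*"
| table _time user uri status

A similar search, or one against the _audit index, can be saved as an alert.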
(This is a continuation of https://community.splunk.com/t5/Splunk-Search/Creating-a-search-that-looks-up-values-from-one-logfile-and/td-p/565097) So what I'm trying to do is compare values across two different logs that share a unique ID (see the link above for more info). The problem I have is that one log is written by a cron job every 5 minutes, while the other log is only written when required. I'm using the code below:

index=foo (sourcetype=test1 OR sourcetype=test2) host=A* (source="/data/stuff/logfile1.log" OR source="/data/stuff/logfile2.log")
| eval ID=coalesce(lastupdate_direc,file1ID)
| stats values(lastupdate_time) as lastupdate_time, values(file1ID) as file1ID by host, ID
| eval int_time=strptime(lastupdate_time, "%F %H:%M")
| eval timenow=now()
| eval diff_new=timenow-int_time
| eval days_since=((diff_new-14400)/60/60/24)
| table lastupdate_time host name ID days_since

As I'm trying to be nice to my indexer, I'm only looking at the past 15 minutes (because of time drift between the servers), so I get multiple lastupdate_time entries in the table, and because of that the days_since field is left blank. I've tried using chart latest of the values, but I get no results. Am I just not outputting to the correct visualization function, or would something other than table be better? Thanks!
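A sketch of the stats stage using latest() instead of values(), so only one lastupdate_time survives per host/ID and the strptime/arithmetic chain gets a single value to work on (ID is the coalesced field from the eval above):

| stats latest(lastupdate_time) AS lastupdate_time values(file1ID) AS file1ID by host, ID
| eval days_since=(now() - strptime(lastupdate_time, "%F %H:%M") - 14400) / 86400

strptime does not handle a multivalue lastupdate_time, which is likely why days_since comes up blank.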
Dear Splunk Community, I need some help fetching data from a source, then using the results as a search parameter for several other searches, and putting all of the results into one statistics table. I have (names have been changed):
One index: [myIndex]
One host: [myHost]
Source one: [/xxx/xxx/xxxxx/xxxx/log/xxxxxxx/*/*.log]
Source two: [/yyy/yyy/yyyyy/yyyy/log/yyyyyyyyy/firstlogfile.log]
Source three: [/zzz/zzz/zzzzz/zzzz/log/zzzzzzzz/seconflogfile.log]
I have the following dashboard: in the upper left is a statistics table. A RUNID is basically a timestamp. I have a lot of different files that contain the RUNID. I collect all the files and then display each RUNID only once (so duplicates are not shown) using the following query:

index="myIndex" host="myHost" source="/xxx/xxx/xxxxx/xxxx/log/xxxxxx/*/*.log"
| eval source=replace(source,"^/xxx/xxx/xxxxx/xxxx/log/xxxxxx/","")
| eval source=replace(source,"/.*","")
| stats values(source) by source
| fields source
| rename source AS "RUNID"
| sort RUNID desc

When I click any RUNID, another search is performed in the upper-right (DATA) statistics table, with the following query:

index="myIndex" host="myHost" source="/xxx/xxx/xxxxx/xxxx/log/xxxxxx/$tokenRUNID$/*.log" CTJT* $meldingen$
| fields _time, _raw
| rename _time AS "Datum"
| convert timeformat="%m-%d-%Y %H:%M:%S" ctime(Datum)
| eval _raw = replace(_raw,"^[^#]*#", "")
| rename _raw AS "Sensor Activiteit"
| sort Datum desc

At the bottom (center) I have 3 single-value fields that show ERRORS, WARNINGS and INFO. Each has the following code (except for the ERROR/WARN/INFO keyword):

index="myIndex" host="myHost" source="/xxx/xxx/xxxxx/xxxx/log/xxxxxx/$tokenRUNID$/*.log" CTJT* AND CASE("ERROR")
| stats count

And finally I have a single-value field showing the profile:

index="myIndex" host="myHost" source="/yyy/yyy/yyyyy/yyyy/log/yyyyyyyyy/firstlogfile.log" OR source="/zzz/zzz/zzzzz/zzzz/log/zzzzzzzz/seconflogfile.log" $tokenRUNID$ "started with profile"
| rex field=_raw "profile\s(?<whatever>[^\s\r]+)"
| stats count by whatever
| fields - count

So right now I have multiple data searches in different representations (single-value fields, data tables, etc.). I would like to create a table of RUNIDs that also shows the PROFILE, DATE and the WARNINGS, ERRORS and INFO counts right next to each one. So I basically want to place multiple searches and their results into one statistics table. I have tried playing with appendcols as described in this topic: https://community.splunk.com/t5/Splunk-Search/multiple-search-output-in-a-single-table-list-something/m-p/39644#M9063 But using index gives me the error "Unknown search command 'index'". How can I manage to get the above? Thanks in advance.
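One way to sketch this, combining everything into one table keyed on RUNID (the "Unknown search command 'index'" error usually just means a subsearch is missing its leading search keyword). The RUNID rex against the profile logs is a placeholder, since the post does not show where the RUNID appears in those lines:

index="myIndex" host="myHost" source="/xxx/xxx/xxxxx/xxxx/log/xxxxxx/*/*.log" CTJT*
| rex field=source "log/xxxxxx/(?<RUNID>[^/]+)/"
| eval level=case(like(_raw,"%ERROR%"),"ERRORS", like(_raw,"%WARN%"),"WARNINGS", like(_raw,"%INFO%"),"INFO")
| where isnotnull(level)
| chart count over RUNID by level
| join type=left RUNID
    [ search index="myIndex" host="myHost" (source="/yyy/yyy/yyyyy/yyyy/log/yyyyyyyyy/firstlogfile.log" OR source="/zzz/zzz/zzzzz/zzzz/log/zzzzzzzz/seconflogfile.log") "started with profile"
      | rex field=_raw "profile\s(?<PROFILE>[^\s\r]+)"
      | rex field=_raw "(?<RUNID>\d+)"
      | stats latest(PROFILE) AS PROFILE latest(_time) AS DATE by RUNID
      | convert ctime(DATE) ]
| table RUNID PROFILE DATE ERRORS WARNINGS INFO
| sort - RUNID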
Hi, Is trade-in value with Splunk Enterprise available when switching from a perpetual to a term license? Best regards, Pieter
I have a Splunk forwarder set up on my server to forward all my logs to Splunk. I could not find a default message field in Splunk search. I would like the log string to be part of a specific field, 'message'. So I decided to configure it in the Splunk forwarder settings as follows:

# /opt/splunkforwarder/etc/system/local/transforms.conf
[message-transform]
REGEX = *
FORMAT = message::"$1"
WRITE_META = true

# /opt/splunkforwarder/etc/system/local/props.conf
[host::*]
TRANSFORMS-message = message-transform

# /opt/splunkforwarder/etc/system/local/fields.conf
[message]
INDEXED=true

But even with this configuration, I am not seeing the message field in my Splunk search. Am I missing anything?
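Two things stand out, sketched here as a corrected version: REGEX = * is not a valid regex and has no capture group, so $1 never exists, and the quotes in FORMAT become literal characters. Also, index-time transforms run at the first parsing tier (an indexer or heavy forwarder), so placing them on a universal forwarder has no effect:

# transforms.conf — on the indexers or a heavy forwarder
[message-transform]
REGEX = (.*)
FORMAT = message::$1
WRITE_META = true

# props.conf — same tier
[host::*]
TRANSFORMS-message = message-transform

# fields.conf — on the search head, so the indexed field is searchable
[message]
INDEXED = true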
Hello all, we added a webhook as one of the action items for a Splunk report, but whenever the search criteria are met we never get a hit on the webhook. So we tried adding one more simple action, an email alert. Surprisingly, I receive an email once the search criteria are met, but still no hit on the webhook. The webhook URL is hosted on our internal network, so are we supposed to open the firewall for the Splunk Cloud instance URL? Also, does the Splunk team need to add our internal IP anywhere? Please provide your valuable suggestions to get this working. Please note that we are using a Splunk Cloud instance. Thanks in advance.
I am trying "Prometheus Monitoring Extension" and facing FetchError.  FetchError: request to https://fra-ana-api.saas.appdynamics.com/events/schema/prometheus_events failed, reason: connect ETIMEDOU... See more...
I am trying "Prometheus Monitoring Extension" and facing FetchError.  FetchError: request to https://fra-ana-api.saas.appdynamics.com/events/schema/prometheus_events failed, reason: connect ETIMEDOUT 18.159.38.70:443 Prometheus Monitoring Extension Link: https://www.appdynamics.com/community/exchange/prometheus-monitoring-extension/
Identity: 314 assets are currently exceeding the field limits set in the Asset and Identity Management page. Data truncation will occur unless the field limits are increased. Sources: [merge].
Hi guys, probably a very simple question, but I just tangled myself up in the logic. I want to create 2 fields: one with today's date, which I have:
| eval today_date=strftime(now(),"%B %d, %Y")
and a second one where I want to subtract 30 days from that date. How do I go about it?
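A sketch using relative_time(), which shifts an epoch time by a relative-time modifier before formatting (the second field name is illustrative):

| eval today_date=strftime(now(), "%B %d, %Y")
| eval date_30d_ago=strftime(relative_time(now(), "-30d"), "%B %d, %Y")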
Hello there, in Splunk Cloud, is there a way an alert could be created for, say, the following: an attacker logs in from London and the user is also based in London; how do we identify them? As far as I know we can do this at country level based on geo-tagging of IPs, but can we configure this to drill down to city level? If so, how? Many thanks, Mozza
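A sketch of the city-level comparison, assuming a hypothetical authentication index and a lookup mapping each user to a home city (iplocation's City field is standard; the index, lookup, and field names are illustrative):

index=auth action=success
| iplocation src_ip
| lookup user_home_city user OUTPUT home_city
| where City=home_city

The where clause flips depending on intent: City=home_city finds logins that blend in with the user's normal location, City!=home_city finds the classic out-of-place login.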
I can see that the Splunk App for CEF has been announced EOL. Is a new app coming, or has it been renamed or rebranded? Does anyone know of any other app that does the same job?

https://docs.splunk.com/Documentation/CEFapp/2.3.0/ReleaseNotes/Newfeatures