All Topics



Hello, I need to build a network topology of my infrastructure. How can I get CDP or LLDP command results into Splunk?
Hi, I want to index only the name of a generated file, not its content. For example, FILE1 contains "I am ABCD". I want to monitor and index only the name FILE1 in Splunk and ignore the content "I am ABCD". Can this be done in Splunk? If so, how? Thanks!
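There is no built-in monitor mode that indexes only file names, so one common workaround is a scripted input that periodically lists the directory and emits one event per file name. A minimal sketch, assuming a hypothetical script `list_files.sh` in the app's `bin` directory and a hypothetical watched path:

```
# inputs.conf on the forwarder (sketch; script name and paths are hypothetical)
[script://./bin/list_files.sh]
interval = 300
sourcetype = file_names
disabled = 0
```

The script itself could be as simple as `ls -1 /path/to/watched/dir`, so each run indexes the current file names without ever reading the file contents.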
Hi everyone, I passed a token containing a file path with some special characters into a search, but it does not return any results:

index=wineventlog EventCode=4660 OR EventCode=4663 Account_Name!="ANONYMOUS LOGON" host="MELFP" Account_Name!="*$"
| eval ObjectName=urldecode("D:\Company Data\HR\Payroll\HR$ (MELFP02) (P) - Shortcut.lnk")
| eval ObjectName=replace(ObjectName,"\\\\","\\\\\\")
| where match(Object_Name,ObjectName)
| table _time host Account_Name Account_Domain Object_Name Accesses EventCodeDescription
| sort _time desc

However, if I compare directly as below, it does return results:

| search Object_Name="D:\\Company Data\\HR\Payroll\\HR$ (MELFP02) (P) - Shortcut.lnk"

I am not sure why, because when I display ObjectName it is decoded correctly as "D:\\Company Data\\HR\Payroll\\HR$ (MELFP02) (P) - Shortcut.lnk".
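One point worth noting: `match()` treats its second argument as a regular expression, so every backslash and every special character in the path (`$`, `(`, `)`) would need regex escaping. If an exact comparison is all that is needed, the `=` operator in `where` compares strings literally and sidesteps the escaping problem entirely. A sketch under that assumption (the doubled backslashes below are eval string escapes, not regex escapes):

```
| eval ObjectName="D:\\Company Data\\HR\\Payroll\\HR$ (MELFP02) (P) - Shortcut.lnk"
| where Object_Name=ObjectName
```

This only works when the token value is the complete, literal path; if partial or wildcard matching is required, `match()` with fully regex-escaped input would still be needed.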
Greetings, I want to use one base query for my dashboard, with a time range going back a couple of months. I thought I would populate one big search and then have dashboard panels narrow it into relative time chunks for day/week/month. I tried using | where, but I noticed that

| mysearch earliest=08/10/2020:00:00:00 latest=@d | where _time>relative_time("@d", "-2d@d")

does not generate results, since relative_time() wants an epoch time as its first argument, which is frustrating given that the second argument happily accepts relative snap-to time. Is there a way to do this?
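A sketch of one way around this: `relative_time()` does accept any epoch value as its first argument, and `now()` returns one, so the relative snap-to expression can still be used (assuming "two days back from today" is the intended window):

```
| mysearch earliest=08/10/2020:00:00:00 latest=@d
| where _time > relative_time(now(), "-2d@d")
```

Since the base search already caps results at `latest=@d`, snapping `now()` with `-2d@d` yields the same boundary that `"@d"` was intended to produce.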
Hi, we have a distributed Splunk Enterprise deployment (1 SH, 2 indexers, 2 heavy forwarders). We would like to install the Salesforce App on the search head and the Salesforce Add-on on a heavy forwarder, but I don't know how to map the index name "SFDC" between the SH, the indexers, and the HF. Is it automatically detected and assigned the index name "SFDC", or should I manually create the index "SFDC" on all indexers and the HF? What is the default index name when installing the Salesforce Add-on or the Salesforce App? Thanks, Khanh Le
Here's what I'm trying to accomplish:

requestID    status
123456       errored
321654       Success
789456       errored

I'm a newbie; maybe I'm going about this all wrong and there may be another way, but I don't think so based on the info I have. Here's what I have so far (I'm probably over-thinking this):

index=someindex sourcetype=sometype "request syntax" OR "error syntax" OR "success syntax"
| rex field=_raw "request id: '(?<requestID>\d+)',\text"
| rex field=_raw ".*(?<error>Error response received)\stext"
| rex field=_raw ".*(?<Success>Database request executed):\stext"
| eval requestID =if(requestID=(error),"Errored", "Success")
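A sketch of an alternative approach, with the caveat that it assumes every event carries the request id (which may not hold for this data): rather than overwriting requestID with eval, derive a per-event status field and then aggregate per request:

```
index=someindex sourcetype=sometype "request syntax" OR "error syntax" OR "success syntax"
| rex field=_raw "request id: '(?<requestID>\d+)'"
| eval status=case(searchmatch("Error response received"), "errored",
                   searchmatch("Database request executed"), "Success")
| stats latest(status) as status by requestID
```

If the id only appears on the initial request event, the error/success events would first need to be correlated back to it (for example by a shared session or transaction field) before `stats ... by requestID` can work.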
I am trying to use the Splunk Add-on Builder (v2.2.0) to build a TA that pulls data via a REST API into Splunk (v7.3.3). I am not a developer, but I used it to build the most basic single-input configuration with a Global Account field. When I go to my TA (visible) and select Configuration, it stays stuck on "Loading". I am using a global text box for my API key identifier and token value, so in the configuration menu I am unable to configure this. I saw this post here and learned of a dev command. Output from the developer debug command:

===================================================
Found scheme="hackerone". Locating script for scheme="hackerone"...
No regular file="/opt/splunk/etc/apps/TA-ta-hackerone/linux_x86_64/bin/hackerone.sh".
No regular file="/opt/splunk/etc/apps/TA-ta-hackerone/linux_x86_64/bin/hackerone.py".
No regular file="/opt/splunk/etc/apps/TA-ta-hackerone/linux_x86_64/bin/hackerone.js".
No regular file="/opt/splunk/etc/apps/TA-ta-hackerone/linux_x86_64/bin/hackerone".
No script found in dir="/opt/splunk/etc/apps/TA-ta-hackerone/linux_x86_64/bin"
No regular file="/opt/splunk/etc/slave-apps/TA-ta-hackerone/linux_x86_64/bin/hackerone.sh".
No regular file="/opt/splunk/etc/slave-apps/TA-ta-hackerone/linux_x86_64/bin/hackerone.py".
No regular file="/opt/splunk/etc/slave-apps/TA-ta-hackerone/linux_x86_64/bin/hackerone.js".
No regular file="/opt/splunk/etc/slave-apps/TA-ta-hackerone/linux_x86_64/bin/hackerone".
No script found in dir="/opt/splunk/etc/slave-apps/TA-ta-hackerone/linux_x86_64/bin"
No regular file="/opt/splunk/etc/apps/TA-ta-hackerone/bin/hackerone.sh".
Found script "/opt/splunk/etc/apps/TA-ta-hackerone/bin/hackerone.py" to handle scheme "hackerone".
XML scheme path "/scheme/title": "title" -> "hackerone"
XML scheme path "/scheme/description": "description" -> "Go to the add-on's configuration UI and configure modular inputs under the Inputs menu."
XML scheme path "/scheme/use_external_validation": "use_external_validation" -> "true"
XML scheme path "/scheme/streaming_mode": "streaming_mode" -> "xml"
XML scheme path "/scheme/use_single_instance": "use_single_instance" -> "false"
XML arg path "/scheme/endpoint/args/arg": "name" -> "name"
XML arg path "/scheme/endpoint/args/arg/title": "title" -> "hackerone Data Input Name"
XML arg path "/scheme/endpoint/args/arg": "name" -> "rest_version"
XML arg path "/scheme/endpoint/args/arg/title": "title" -> "rest_version"
XML arg path "/scheme/endpoint/args/arg/required_on_create": "required_on_create" -> "0"
XML arg path "/scheme/endpoint/args/arg/required_on_edit": "required_on_edit" -> "0"
XML arg path "/scheme/endpoint/args/arg": "name" -> "program_id"
XML arg path "/scheme/endpoint/args/arg/title": "title" -> "program_id"
XML arg path "/scheme/endpoint/args/arg/required_on_create": "required_on_create" -> "0"
XML arg path "/scheme/endpoint/args/arg/required_on_edit": "required_on_edit" -> "0"
XML arg path "/scheme/endpoint/args/arg": "name" -> "hackerone_api_credentials"
XML arg path "/scheme/endpoint/args/arg/title": "title" -> "hackerone_api_credentials"
XML arg path "/scheme/endpoint/args/arg/required_on_create": "required_on_create" -> "0"
XML arg path "/scheme/endpoint/args/arg/required_on_edit": "required_on_edit" -> "0"
XML arg path "/scheme/endpoint/args/arg": "name" -> "rest_endpoint"
XML arg path "/scheme/endpoint/args/arg/title": "title" -> "rest_endpoint"
XML arg path "/scheme/endpoint/args/arg/required_on_create": "required_on_create" -> "0"
XML arg path "/scheme/endpoint/args/arg/required_on_edit": "required_on_edit" -> "0"
Setting up values from introspection for scheme "hackerone".
Setting "title" to "hackerone".
Setting "description" to "Go to the add-on's configuration UI and configure modular inputs under the Inputs menu.".
Setting "use_single_instance" to false.
Setting "use_external_validation" to true.
Setting "title" to "hackerone_api_credentials".
Setting "required_on_create" to false.
Setting "required_on_edit" to false.
Setting "title" to "hackerone Data Input Name".
Setting "title" to "program_id".
Setting "required_on_create" to false.
Setting "required_on_edit" to false.
Setting "title" to "rest_endpoint".
Setting "required_on_create" to false.
Setting "required_on_edit" to false.
Setting "title" to "rest_version".
Setting "required_on_create" to false.
Setting "required_on_edit" to false.
Introspection setup completed for scheme "hackerone".
Can we make the "Show Source" option display the log in the same format as the original log file?
Hi all, I am trying to extract fields using the spath command. I noticed that fields with a period in the name cannot be extracted, while fields without a period are extracted correctly (example fields: action.email and alert.suppress.period). Is there any workaround for this? Any help would be much appreciated. Thanks! Here is my search:

| rest /servicesNS/nobody/SA-ITOA/event_management_interface/correlation_search
| eval value=spath(value,"{}")
| mvexpand value
| eval name = spath(value, "name")
| eval search = spath(value, "search")
| eval schedule = spath(value, "cron_schedule")
| eval status = spath(value, "disabled")
| eval send_email = spath(value, "action.email")
| eval suppress_period = spath(value, "alert.suppress.period")
| fields name, search, schedule, status, send_email, suppress_period
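The underlying issue is that spath interprets a period in the path as a level separator, so "action.email" is read as a key email nested under action rather than a single key containing a literal dot. One workaround is to extract those two fields with rex instead. A sketch, with the caveat that it assumes the value field holds the keys as ordinary quoted JSON strings:

```
| rest /servicesNS/nobody/SA-ITOA/event_management_interface/correlation_search
| eval value=spath(value,"{}")
| mvexpand value
| rex field=value "\"action\.email\"\s*:\s*\"?(?<send_email>[^\",}]*)"
| rex field=value "\"alert\.suppress\.period\"\s*:\s*\"?(?<suppress_period>[^\",}]*)"
```

The remaining dot-free fields can still be pulled out with spath exactly as in the original search.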
I have an event like the one below, indexed using the props.conf entry shown. But it is not coming in as JSON; it is indexed only in raw format, and I'm not sure why. Because of that, the column names (ID, Name, etc.) are not extracted automatically.

[{'ID': 123, 'Name': 'hostname', 'SetupComplete': True, 'Plugin': 'someplugin', 'PluginName': 'someplugin', 'DomainName': 'something', 'DomainEmail': '', 'dontknow': '', 'Address': '1.2.3.4', 'BackupIntervalString': 'Manual', 'LastBackupString': 'Never (1 uploaded)', 'LastBackupAttemptString': 'Never', 'NextBackupString': '', 'Protocol': 'scp', 'Location': '', 'BaselineState': 'N/A', 'LastBackupCompliant': False, 'LastBackupCompliantString': 'N/A', 'ComplianceScore': -1, 'RetryInterval': 45, 'NumRetries': 0, 'KeepVersions': 0, 'Owner': 'someone@something.com', 'State': 'Idle', 'Uptime': 'Not monitored', 'BackupStatus': 'OK', 'BackupDU': '100MB', 'Manufacturer': 'dontknow', 'Model': 'dontknow', 'AssetID': '', 'Serial': '', 'Firmware': '', 'ApprovedBackups': 0, 'CurrentApproved': False, 'NumBackups': 1, 'Disabled': 'No', 'DomainDisabled': False, 'ApprovedState': 'good', 'IsPush': False, 'Updated': '0001-01-01T00:00:00Z'},

Can you please help? Here is my props.conf entry:

[example_json]
CHARSET = UTF-8
DATETIME_CONFIG = CURRENT
KV_MODE = json
TRUNCATE = 0
SEDCMD-removejunk1 = s/^\[//g
LINE_BREAKER = ([\r\n,]*(?:{[^[{]+\[)?){'ID
SHOULD_LINEMERGE = false
SEDCMD-remove_end = s/]$//g
NO_BINARY_CHECK = true
disabled = false
pulldown_type = true

Please tell me if I need to modify the props.conf entry, or help me with the extraction of the fields.
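One likely cause, hedged since only a single sample event is shown: the payload is a Python-literal dump rather than valid JSON — it uses single quotes and capitalized True/False — so KV_MODE = json has nothing it can parse. A sketch that rewrites the events into valid JSON at parse time; note the blanket quote substitution would also mangle any apostrophes inside values, so this needs testing against representative data first:

```
[example_json]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n,]*(?:{[^[{]+\[)?){'ID
TRUNCATE = 0
KV_MODE = json
SEDCMD-removejunk1 = s/^\[//g
SEDCMD-remove_end = s/]$//g
# rewrite Python-literal syntax into JSON before indexing (assumption: no apostrophes in values)
SEDCMD-quotes = s/'/"/g
SEDCMD-true   = s/: True/: true/g
SEDCMD-false  = s/: False/: false/g
```

With the events valid JSON after these substitutions, KV_MODE = json should extract ID, Name, and the other keys at search time.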
We are using the UF to monitor multiple files which are considered transient. All we want to do is get them forwarded and then remove them. We are using the splunk list inputstatus command to verify that files have been forwarded, but even after removing them from the monitored location, the files still show as open for reading in the inputstatus output. How long should it take for the UF to recognize that a file is no longer present and should no longer be monitored? Is there a way to explicitly remove a file from being monitored without restarting the UF or adding files to the blacklist? I see there is a remove operation under https://<server>:<port>/services/admin//monitor. What does that do?
Hi, data was indexed 4 hours ago. At the time, I was able to see the data when searching the relevant index. 4 hours later, that data is no longer there when running the same search: index=abc123 source=mysource. I can see other data in the index, and the retention period is configured for 3 months. How can I view this data? What can I check to find out why it is missing? Splunk on-prem, 7.3.15. Thanks, PN
I'm trying to create a search that always looks at the responses from the latest version of my app. The `version` field is already defined, and the values are something like 1.0, 1.1, or 1.2. Currently, any time I update my app I need to update my search query to look for the new version (version=1.3). I want to do something like "version=my_latest_version", where my_latest_version is a dynamic value that returns the max of all current "version" field values. Is this possible? Thanks!
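One way to sketch this is a subsearch that computes the maximum version and hands it back to the outer search as a version=... term via the return command (index and sourcetype names below are placeholders):

```
index=myindex sourcetype=mysourcetype
    [ search index=myindex sourcetype=mysourcetype
      | stats max(version) as version
      | return version ]
```

One caveat: max() on string values like "1.9" and "1.10" compares lexicographically, so multi-digit minor versions may sort incorrectly; if that can occur, the version would first need to be split into numeric parts before taking the maximum.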
Hi everyone, can we make Splunk display events in the same format as the source log file? In other words, can we have exceptions printed in the same format as in the actual log file?
I have deployed the Splunk App for Windows Infrastructure and have been able to fix most of the issues so far with the dashboards and panels, but there is one dashboard that eludes me. Under Windows > Host Monitoring > Host Inventory, three of the fields simply say "Unknown": Computer Name, Service Pack, and Last installed update. I found the HTML file that generates this page (this dashboard can't be edited the usual way, so I went into the actual files on the host server), but there isn't much I can do with it. The odd thing is that, on the reference page for this dashboard, those same three fields also show as "Unknown". Is this a known issue that has not been fixed and has no workaround?
Hi, I am facing a challenge with some of my Splunk logs being merged into one event. I have tried breaking them by adding the settings below to the Splunk forwarder config, but it doesn't work. Can someone suggest what I am missing here? props.conf in local:

########## APPLICATION SERVERS ######
[default]
SHOULD_LINEMERGE = false

[event_logservice]
SHOULD_LINEMERGE = false
LINE_BREAKER= (\d{4}-\d{2}-\d{2}\s+\d+:\d+:\d+.\d+\s+-\d+\s+Event)
MAX_TIMESTAMP_LOOKAHEAD = 75
TRUNCATE = 0

Additional details: logs are written to files by Logstash, and the forwarder then reads and pushes the data. My log file:

2020-08-17 14:49:21.161 -0700 Event log_level="info" build_id="HEAD (d3b8457cc9)" bzdate="20200817" serial_no="KJST45HSS" register="ABC" sessionId="KJST45HSS_20200817_144739196_1" wid="H34-vx-841D6B9C-8158-4975-9AB3-FDB5E9FD80E8" component="Manager" message="adding "
2020-08-17 14:49:21.163 -0700 Event log_level="info" build_id="HEAD (d3b8457cc9)" bzdate="20200817" serial_no="KJST45HSS" register="ABC" sessionId="KJST45HSS_20200817_144739196_1" wid="H34-vx-841D6B9C-8158-4975-9AB3-FDB5E9FD80E8" component="Manager" message="adding completion "

** The two rows above are shown merged in Splunk, and it is happening randomly for other log events as well.
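Two things worth checking, hedged since the full topology isn't shown. First, LINE_BREAKER is applied on the parsing tier (heavy forwarder or indexer), so placing it in props.conf on a universal forwarder generally has no effect. Second, everything inside the LINE_BREAKER capture group is discarded, so a pattern that captures the whole timestamp up to the word Event would strip that text from each event. A sketch that breaks before each timestamp while discarding only the newlines:

```
[event_logservice]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)\d{4}-\d{2}-\d{2}\s+\d+:\d+:\d+\.\d+\s+-\d+\s+Event
MAX_TIMESTAMP_LOOKAHEAD = 75
TRUNCATE = 0
```

Only the newline run is inside the capture group, so the timestamp stays at the start of every event and remains available for timestamp extraction.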
Hello, I have noticed that in some of my dashboards, especially the more complicated ones with multiple subsearches, the custom cell renderer does not work as expected. Specifically, the addCellRenderer method doesn't trigger a re-render as the documentation states: https://dev.splunk.com/enterprise/docs/developapps/visualizedata/displaydataview/howtocustomizetablecells/ I have verified this with console logging: the constructor of the cell renderer is invoked, but none of the other critical methods, like setup and of course render, ever run. I need to reference some field data in the renderer to apply styles, so I am waiting until data has been fetched for the particular table view before adding the cell renderer. Is this the issue?

searchResults.on("data", function() {
    if (searchResults.hasData()) {
        // ... add fields to the cell renderer object, then call addCellRenderer
    }
});

Any help appreciated, thank you.
Is applying (or retaining) conditional numerical value-based field formatting after applying a fieldformat that normalizes the values an option? (It appears not.)

[screenshot: no fieldformat — right-justified, as it should be]
[screenshot: after fieldformat — left-justified]

I hope it's clear why neither option is satisfactory: fields with fieldformat applied are left-justified and lose the numerical value-based conditional formatting, making them counter-intuitive and generally unreadable. Fields without fieldformat can quickly and irrevocably become unreadable, since the values can easily get into petabytes depending on the search timeframe and how busy the application has been. For comparison, the screenshot below is from a much younger competing product that enables auto-formatting and normalizing of data in ways that appear to be missing in Splunk:
I am getting the attached error while configuring the Splunk Event Ingestion integration in ServiceNow. I have verified that communication from the MID server to Splunk ES works on port 8089; the connection is allowed. A local user was created with Analyst-level privilege. Can anyone tell me why we are getting a 404 error and what needs to be checked on the Splunk ES end?
Hi, we're running the JMS Modular Input (jmsta) on heavy forwarders on Linux VMs. We have 4 heavy forwarders with jmsta running on each, and we typically get 3-4 million records from one queue. It is taking too much time to dequeue the data; how can we increase the ingestion rate? We have already increased "-Xms512m","-Xmx1024m". @Damien_Dallimor