All Topics



Hi folks, my use case is to change the column widths in a classic dashboard panel. I tried the configuration below in the XML, but it only works for up to four columns. It does not work as expected with 13 columns, where I want only the first four columns visible without scrolling right. Any ideas?

    <html depends="$alwaysHideCSSPanel$">
      <style>
        #tableColumWidth table thead tr th:nth-child(1) {
          width: 80% !important;
          overflow-wrap: anywhere !important;
        }
      </style>
    </html>
    <table id="tableColumWidth">

Note: I also tried adding more '#tableColumWidth table thead tr th:nth-child(n)' rules.
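A minimal CSS sketch of one approach, assuming the panel keeps the tableColumWidth id from the post: table-layout: fixed makes the browser honor explicit per-column widths, a :nth-child(-n+4) rule sizes the first four columns, and a :nth-child(n+5) rule squeezes the remaining nine (the 22% and 40px figures are placeholder values to tune).

    <html depends="$alwaysHideCSSPanel$">
      <style>
        /* Force the browser to respect explicit column widths */
        #tableColumWidth table {
          table-layout: fixed !important;
        }
        /* Give each of the first four columns most of the panel width */
        #tableColumWidth table thead tr th:nth-child(-n+4) {
          width: 22% !important;
          overflow-wrap: anywhere !important;
        }
        /* Squeeze columns 5 through 13 so they take little horizontal space */
        #tableColumWidth table thead tr th:nth-child(n+5) {
          width: 40px !important;
        }
      </style>
    </html>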
hi team, we are getting aws:metadata in the Splunk Add-on for AWS, but it is not reflected in the Splunk App for AWS. I verified the points below and confirmed the data is being collected and indexed properly.

1. Ensure the metadata is being collected: first, check that the metadata is actually being collected by the Splunk Add-on for AWS by searching for it. For example:

    index=* sourcetype=aws:metadata

If this search returns results, the metadata is being collected and indexed by Splunk.

2. Check that the metadata is being forwarded: if the metadata is collected by the add-on but not showing up in the Splunk App for AWS, it may not be properly forwarded from the indexer to the search head. Check with:

    index=* sourcetype=aws:metadata | stats count by sourcetype

A non-zero count for aws:metadata means the data is being forwarded.

3. Check that the metadata is being indexed: if the metadata is collected and forwarded but still not showing up in the Splunk App for AWS, it may not be properly indexed. Check with:

    index=* sourcetype=aws:metadata | table _raw

If the _raw field contains data, the data is being indexed properly.
I wanted to know whether it is possible to integrate Splunk 9 with Firepower. If so, I would like documentation on it.
Hi everyone, as usual I have a strange question: I need to send a subset of the logs received from an appliance to an external SIEM via syslog. The appliance is a MobileIron server with a Universal Forwarder embedded in it. I configured the Heavy Forwarder and sending syslog works fine. However, all the logs from the source appliance are sent via syslog, not just the subset I want. Usually this is solved with _TCP_ROUTING and _SYSLOG_ROUTING in inputs.conf. The problem is that the source server is a MobileIron appliance that sends logs through an embedded Universal Forwarder whose configuration files I cannot edit by hand, so I cannot set per-input routing destinations. Can anyone hint at a workaround to send only two specific sourcetypes via syslog, while still sending all logs to the indexers? Thanks in advance. Ciao. Giuseppe
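A minimal sketch of the usual workaround, assuming the routing can happen on the Heavy Forwarder rather than the UF: props.conf and transforms.conf on the HF can set _SYSLOG_ROUTING per sourcetype at parse time, so only the chosen sourcetypes pick up the syslog destination (the group name my_siem and the sourcetype names are placeholders).

    # props.conf on the Heavy Forwarder
    [mobileiron:sourcetype1]
    TRANSFORMS-routing = route_to_siem

    [mobileiron:sourcetype2]
    TRANSFORMS-routing = route_to_siem

    # transforms.conf on the Heavy Forwarder
    [route_to_siem]
    REGEX = .
    DEST_KEY = _SYSLOG_ROUTING
    FORMAT = my_siem

    # outputs.conf on the Heavy Forwarder
    [syslog:my_siem]
    server = siem.example.com:514

All events still flow to the indexers through the default tcpout group; only events of the two sourcetypes also get the syslog destination.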
What are the indexer acknowledgement parameters in outputs.conf?
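For reference, a minimal outputs.conf sketch: indexer acknowledgement is controlled by the useACK setting in a tcpout stanza (the group name and server addresses are placeholders).

    [tcpout:primary_indexers]
    server = idx1.example.com:9997, idx2.example.com:9997
    # Wait for the indexer to acknowledge receipt before removing events from the wait queue
    useACK = true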
If we execute an eval statement to create a new field, will that field be added to the data on disk?
In terms of search time, which performs better: stats or transaction?
Hello, I developed an external lookup script in Python which makes an HTTPS API call using password authentication. The lookup script reads the password from a custom .conf file. When I submitted my app to Splunkbase, the result was:

    check_for_secret_disclosure
    Password is being stored in plain text. Client's secret must be stored in encrypted format.
    You can use this reference to manage secret storage:
    https://dev.splunk.com/enterprise/docs/developapps/manageknowledge/secretstorage/
    File: appserver/static/javascript/views/app.js Line: 95

There is no problem writing the password to passwords.conf; I followed the Weather App Example. The problem starts when I need to read the password from the Python external lookup script. The general Splunk documentation suggests using client.connect, but client.connect needs a Splunk user authentication, so another secret! I cannot find a method to read the secret the way splunklib.searchcommands allows, for example. I have Splunk Enterprise, so I could leave the API password in clear text, but I would like to use secret storage as suggested. How can I fix this problem? Thank you very much. Kind Regards, Marco
Hi Team, I have created an app on the DS that has inputs.conf with a monitor stanza (to monitor a .trc file). I created a server class and mapped the app to the client. Now, no data is getting indexed, and no internal logs are generated for this configuration. I have checked that the file path and permissions are correct. Kindly suggest what steps I should follow to troubleshoot this from the UF server side. Thanks
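A minimal troubleshooting sketch on the UF host, assuming a standard $SPLUNK_HOME: btool shows whether the deployed monitor stanza is actually in effect, and splunk list monitor shows which files the tailing processor is watching.

    # Confirm the deployed app arrived and the stanza is active
    $SPLUNK_HOME/bin/splunk btool inputs list monitor --debug

    # List the files the UF is currently monitoring (prompts for credentials)
    $SPLUNK_HOME/bin/splunk list monitor

    # Check the UF's own logs for tailing activity on the .trc file
    grep -iE "TailingProcessor|WatchedFile" $SPLUNK_HOME/var/log/splunk/splunkd.log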
Hi all, I have a field named item_description which is an array of decimal values representing the description of each item. I want to convert each value in item_description into a text string for each item. Original data:

    | makeresults
    | eval item_name = "Name_1,Name_2,Name_3,Name_4,Name_5", item_description = "65_66_67,68_69_70,71_72_73,74_75_76,77_78_79"
    | makemv delim="," item_name
    | makemv delim="," item_description
    | eval mv_zipped=mvzip(item_name,item_description)
    | mvexpand mv_zipped
    | rex field=mv_zipped "(?P<ITEM_NAME>.*),(?P<ITEM_DESP>.*)"
    | makemv delim="_" ITEM_DESP
    | table _time ITEM_NAME ITEM_DESP

The purpose can be fulfilled by the following code:

    | mvexpand ITEM_DESP
    | eval ITEM_DESP_char=printf("%c",ITEM_DESP)
    | eventstats list(ITEM_DESP_char) as ITEM_DESP_char by ITEM_NAME
    | eval ITEM_DESP_join=mvjoin(ITEM_DESP_char,"")
    | dedup ITEM_NAME _time
    | table _time ITEM_NAME ITEM_DESP_join

Output:

    _time  ITEM_NAME  ITEM_DESP_join
    XXX    Name_1     ABC
    YYY    Name_2     DEF
    ZZZ    Name_3     GHI
    000    Name_4     JKL
    111    Name_5     MNO

However, if item_description becomes very long (e.g. length = 50) and there are many items (e.g. 50 items), mvexpand cannot work properly and produces this message:

    command.mvexpand: output will be truncated at 28200 results due to excessive memory usage. Memory threshold of 500MB as configured in limits.conf / [mvexpand] / max_mem_usage_mb has been reached.

Is there any other way to convert the decimal values into ASCII and build the output string without using mvexpand? Thank you very much.
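A minimal sketch of one alternative, assuming Splunk 8.0+ where the mvmap eval function is available: mvmap applies printf to every element of the multivalue field in place, so the per-character mvexpand (the one that blows past the memory limit) is never needed. Only the cheap per-item mvexpand over mv_zipped remains.

    | mvexpand mv_zipped
    | rex field=mv_zipped "(?P<ITEM_NAME>.*),(?P<ITEM_DESP>.*)"
    | makemv delim="_" ITEM_DESP
    | eval ITEM_DESP_join=mvjoin(mvmap(ITEM_DESP, printf("%c", tonumber(ITEM_DESP))), "")
    | table _time ITEM_NAME ITEM_DESP_join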
Hi! I need to connect TheHive and Splunk so that Splunk incidents are sent to TheHive. We have been trying to do this for a long time, but it isn't working out, and we can't find a solution on the Internet or on the forum. Could someone help? If necessary, I will provide any information.
We have created a base search query, but I need to create a root search based on it.
Hi Team, I have forgotten my username or password. Please help me reset the password in Splunk Enterprise.
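A minimal sketch of the documented recovery path for a local admin account, assuming file-system access to the Splunk Enterprise host: move the old local auth file aside, seed a new admin credential with user-seed.conf, and restart.

    # Stop Splunk, then move the existing local auth file aside
    $SPLUNK_HOME/bin/splunk stop
    mv $SPLUNK_HOME/etc/passwd $SPLUNK_HOME/etc/passwd.bak

    # Create $SPLUNK_HOME/etc/system/local/user-seed.conf containing:
    #   [user_info]
    #   USERNAME = admin
    #   PASSWORD = <new password>

    # Restart; Splunk recreates the admin account from the seed file
    $SPLUNK_HOME/bin/splunk start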
Hi, I have a search whose results fall under device1 and device2. For device1 I need to check lookup1 (and for device2, lookup2) for a match deviceName=device, outputting Code and running a regex on Code to extract some values. The regex differs between lookup1 and lookup2. Here is my code, which isn't working (no results displayed):

    <base-search, replaced some details with '...' for security>
    <if device1>
    | lookup lookup1 device as device output CODE
    | mvexpand ...
    | mvexpand ...
    | where ...!= device and like(..., "...%")
    | rename ... as ...
    | eval LRD1=substr(..., 1, 4), LRD2=substr(...,1,4)
    <if device2>
    | lookup lookup2 device as device output CODE
    | search Node=o*
    | rex field=Description "(?<bearer>...)"
    | table *

After each lookup there are a few operations to perform depending on which lookup table is searched. Both searches work fine on their own, just not combined. Thanks
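A minimal sketch of one common pattern, assuming a field (here called deviceType, a hypothetical name) distinguishes the two populations: filter each branch before its lookup and recombine with append, so each branch keeps its own lookup and regex. The <base-search> and the branch-specific operations are placeholders carried over from the post.

    <base-search>
    | where deviceType="device1"
    | lookup lookup1 device OUTPUT CODE
    | eval LRD1=substr(CODE, 1, 4)
    | append
        [ search <base-search>
          | where deviceType="device2"
          | lookup lookup2 device OUTPUT CODE
          | rex field=CODE "(?<bearer>o\S+)" ]

The cost is that append re-runs the base search once per branch; if that is too expensive, the same split can be done after a single pass with eval case() logic instead.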
Hello, I have a lookup table with numbers, which I check against the numbers that match error_code 11:

    index="cdrs" "error_code"="11" "Destino"="*"
    | lookup DIDREPEP Destino OUTPUT Destino
    | table Destino

But it shows some blank results because those numbers are not in the lookup table. How can I make it show only the destinations that are not in the lookup table? Thanks, greetings.
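A minimal sketch of one way to do this, assuming the goal is to keep only destinations absent from the lookup: output the lookup match into a separate field and filter on isnull.

    index="cdrs" "error_code"="11" "Destino"="*"
    | lookup DIDREPEP Destino OUTPUT Destino as matched_destino
    | where isnull(matched_destino)
    | table Destino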
Hi, I'm running a custom command in Splunk that uses the pynacl library. The library seems to work fine up until a file called _sodium.pyd in the same directory is loaded. I'm not sure why, but Splunk doesn't seem to like that file type and throws an error saying the file doesn't exist (see attached picture). I've made sure the file does in fact exist, the path to it is correct, and I have tried redownloading the library a few times, but no dice. If anyone has run into a problem like this before, can you please let me know how you fixed it? Thanks in advance!
Currently I have a search and report producing the live data we need. It is named, as an example, "look now": two words with a space in between. So if I want to check whether a host is online, we type into the search:

    look now hostname splunkserver1

We get the current IP, MAC, hostname, user, etc. Rather than doing this one at a time, I would like to load a CSV file with around 200 hostnames, have Splunk look up that information for each host in the CSV file, match it to what is live on the network, then give me the information: IP, MAC, hostname, user, etc. I successfully got a list to run, but it was only the CSV file, with no matching. Then I tried this:

    look now = ([| inputlookup hostlu.csv | fields hostname])

and got the list, but only the hostnames without IP, MAC, etc., plus the error:

    Error in 'search' command: Unable to parse the search: Comparator '=' has an invalid term on the right hand side

look now {can look at all the hosts that are on or have been on}
hostlu.csv {the file that will be loaded manually daily with just the host names}

Thanks for the help in advance.
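A minimal sketch of the usual subsearch pattern, assuming the live events carry a field with the same name as the CSV column (hostname): drop the '=' comparator and let the subsearch expand into (hostname="x" OR hostname="y" OR ...).

    look now [| inputlookup hostlu.csv | fields hostname]

If the field name in the live events differs from the CSV column, rename inside the subsearch, e.g. | rename hostname AS host, so the generated terms match the event field.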
Hello, I'm having a hard time figuring this out. I just want a status icon that changes color based on search results: 1 = Red, 0 = Green. How would I go about adding a threshold-based icon to Dashboard Studio? Thank you for any help on this one, Tom
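A minimal sketch of one way to do this in Dashboard Studio's source JSON, assuming a single value visualization whose search returns one 0/1 number (the names viz_status, ds_status, and statusColorConfig are placeholders): the rangeValue formatter maps value ranges to colors.

    "viz_status": {
      "type": "splunk.singlevalue",
      "dataSources": {"primary": "ds_status"},
      "options": {
        "majorColor": "> majorValue | rangeValue(statusColorConfig)"
      },
      "context": {
        "statusColorConfig": [
          {"value": "#118832", "to": 1},
          {"value": "#D41F1F", "from": 1}
        ]
      }
    }

Values below 1 render green and values of 1 or more render red; add more from/to entries for additional thresholds.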
We are facing a data quality issue. Sample internal log: WARN messages in the splunkd logs as follows:

    DateParserVerbose - Failed to parse timestamp in first MAX_TIMESTAMP_LOOKAHEAD (40) characters of event. Defaulting to timestamp of previous event (Thu May 4 13:02:33 2023).

I saw some different kinds of logs reporting to production Splunk. Sample production log:

    ++++++++++++++++++++++++++++++++++++++++++++++++
    + New app 'esign-ec-api'                       +
    ++++++++++++++++++++++++++++++++++++++++++++++++
    + Initializing app 'esign-ec-api'              +
    ++++++++++++++++++++++++++++++++++++++++++++++++
    Pinging the JVM took 7 seconds to respond.
    ++++++++++++++++++++++++++++++++++++++++++++++++
    + starting app 'esign-ec-api'                  +
    ++++++++++++++++++++++++++++++++++++++++++++++++

[props]

    Charset=UTF-8
    TIME_PREFIX=^\w{4,7}\s+
    TIME_FORMAT=%Y-%m-%d %H.%M.%S,%3N
    MAX_TIMESTAMP_LOOKAHEAD=40
    SHOULD_LINEMERGE=false
    NO_BINARY_CHECK=true
    TRUNCATE=50000
    Pulldown_Type=True
    LINE_BREAKER=([\r\n])\w+\s+\d+\-\d+\-\d+\s\d+\:\d+\:\d+
    EXTRACT-field1=regex
    EXTRACT-field2=regex

Thank You
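One observation and a minimal sketch: the banner lines above contain no timestamp at all, so the DateParserVerbose warning is expected, and each such event falls back to the previous event's time. If those wrapper banners can simply take the time of indexing, one mitigation is to route them to a dedicated sourcetype with DATETIME_CONFIG = CURRENT (the sourcetype name here is hypothetical).

    # props.conf - a hypothetical sourcetype for the JVM wrapper banner lines
    [esign:wrapper:banner]
    SHOULD_LINEMERGE = false
    LINE_BREAKER = ([\r\n]+)
    # These lines carry no timestamp; stamp them with the time of indexing instead
    DATETIME_CONFIG = CURRENT
    TRUNCATE = 50000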
Hello, I cannot disable my Splunk Cloud password. What should I do?