All Topics


We're updating our Linux servers to Debian 12. A few hosts went "missing" in Splunk afterwards. While investigating, I found out that they were in fact not missing, but had stopped writing log files to /var/log. It seems Debian has switched fully to journald, as I was greeted with this README in /var/log:

You are looking for the traditional text log files in /var/log, and they are gone? Here's an explanation on what's going on: You are running a systemd-based OS where traditional syslog has been replaced with the Journal. The journal stores the same (and more) information as classic syslog. To make use of the journal and access the collected log data, simply invoke "journalctl", which will output the logs in the identical text-based format the syslog files in /var/log used to be. For further details, please refer to journalctl(1). [...]

Of course we could simply install the rsyslog package again, but that feels like a step backwards. So here is my question: is there a default, generic approach for collecting all system and service logs from journald that we can use on our UFs, since log files are obviously not the future on Linux? Best regards
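For what it's worth, this is a minimal sketch of the inputs.conf stanza I have in mind on the UF side, assuming a Universal Forwarder recent enough (8.1+) to ship the native journald input; the index and sourcetype names are placeholders I made up:

# Minimal sketch, assuming a UF with the native journald input (8.1+).
# "os_linux" and "journald" are placeholder index/sourcetype names.
[journald://system-logs]
index = os_linux
sourcetype = journald
disabled = false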
Hi Team, I am getting a raw log like the one below:

2023-07-22 09:18:19.454 [INFO ] [Thread-3] AssociationProcessor - compareTransformStatsData : statisticData: StatisticData [selectedDataSet=0, rejectedDataSet=0, totalOutputRecords=19996779, totalInputRecords=0, fileSequenceNum=0, fileHeaderBusDt=null, busDt=07/21/2023, fileName=SETTLEMENT_TRANSFORM_MERGE, totalAchCurrOutstBalAmt=0.0, totalAchBalLastStmtAmt=0.0, totalClosingBal=8.933513237882E10, sourceName=null, version=1, associationStats={}] ---- controlFileData: ControlFileData [fileName=SETTLEMENT_TRANSFORM_ASSOCIATION, busDate=07/21/2023, fileSequenceNum=0, totalBalanceLastStmt=0.0, totalCurrentOutstBal=0.0, totalRecordsWritten=19996779, totalRecords=0, totalClosingBal=8.933513237882E10]

I want to show each count separately, e.g. totalOutputRecords=19996779 and totalClosingBal=8.933513237882E10. How can we create a query for this, starting from something like: index= "abc" sourcetype = "600000304_gg_abs_ipc2" "AssociationProcessor
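Roughly what I am trying to end up with is something like this (untested sketch; the search terms and field names are taken from the sample event above):

index="abc" sourcetype="600000304_gg_abs_ipc2" "AssociationProcessor" "compareTransformStatsData"
| rex "totalOutputRecords=(?<totalOutputRecords>\d+)"
| rex "totalClosingBal=(?<totalClosingBal>[\d.E+-]+)"
| table _time totalOutputRecords totalClosingBal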
Hi Team, I am getting these two logs on a daily basis:

2023-07-17 08:05:59.764 [INFO ] [Thread-3] TransformProcessor - Started ASSOCIATION process for BusDt=07/16/2023, & version=1
2023-07-17 08:52:44.484 [INFO ] [Thread-3] AssociationProcessor - Successfully completed ASSOCIATION process!! isAssociationBalanced?=true
2023-07-18 08:04:59.764 [INFO ] [Thread-3] TransformProcessor - Started ASSOCIATION process for BusDt=07/17/2023, & version=1
2023-07-18 08:52:44.484 [INFO ] [Thread-3] AssociationProcessor - Successfully completed ASSOCIATION process!! isAssociationBalanced?=true

I want to create one query that calculates the average time between process start and completion (e.g. between the "Started ASSOCIATION process" event at 08:05:59 and the "Successfully completed" event at 08:52:44 on 2023-07-17). My current query is:

index= "600000304_d_gridgain_idx*" sourcetype = "600000304_gg_abs_ipc2" source="/amex/app/gfp-settlement-transform/logs/gfp-settlement-transform.log"

Can someone guide me on how to move forward and build the average query?
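Roughly the shape I am imagining, if transaction is the right tool here (untested sketch; the startswith/endswith strings come from the events above):

index="600000304_d_gridgain_idx*" sourcetype="600000304_gg_abs_ipc2" source="/amex/app/gfp-settlement-transform/logs/gfp-settlement-transform.log" ("Started ASSOCIATION process" OR "Successfully completed ASSOCIATION process")
| transaction startswith="Started ASSOCIATION process" endswith="Successfully completed ASSOCIATION process" maxspan=1d
| stats avg(duration) AS avg_seconds
| eval avg_minutes=round(avg_seconds/60, 1)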
Hi Folks,

When I enter the Ingest Actions page from our Splunk portal, we get the error shown below:

"Unable to load sourcetypes: An unexpected error occurred"

I also attempted to clear the browser's cookies, which worked for a short time before the same error page returned.

Is anyone aware of this problem? If so, please suggest an approach for eliminating it.

How to reproduce the issue: Splunk Homepage > Settings > Ingest Actions > Click on any rule
Let's say my colddb partition is 15TB and the volume's maximum data size is 20TB, as below (indexes.conf). What issues could this cause, or is it OK?

df -h | grep sde
sde 8:64 0 32T 0 disk
-sde1 8:65 0 15T 0 part /apps/splunk/colddb

On the Indexer Cluster Master server:
vi /apps/splunk/etc/master-apps/fmrei_all_indexes_frozen/local/indexes.conf
[volume:secondary]
path = /apps/splunk/colddb
maxVolumeDataSizeMB = 20000000
Hello Splunkers,

What is "the best practice" for ingesting DNS logs into a distributed Splunk environment? I am hesitating between two possibilities (maybe there are others):
- Install a UF on my DNS servers, simply monitor the path where my DNS logs are located (a sketch of what I have in mind is below), and forward the logs to my Splunk environment.
- Or use the Stream App, which seems a little more complicated: https://docs.splunk.com/Documentation/StreamApp/8.1.1/DeployStreamApp/AboutSplunkStream

Let me know what you used / think about that.

Thanks a lot!
GaetanVP
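For the UF option, roughly this in inputs.conf (just a sketch; the monitored path and the index/sourcetype names are placeholders, since the real log location and sourcetype depend on whether it's BIND, Windows DNS, etc.):

# Sketch only - path, index and sourcetype are placeholders for my environment.
[monitor:///var/log/named/query.log]
index = dns
sourcetype = dns_query_log
disabled = false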
Dears,

We would like to report an issue with Splunk ES when navigating the "Search" window. We are no longer able to:
- Move any of the INTERESTING FIELDS into SELECTED FIELDS once a field is selected for a future search.
- Keep the selected "Mode" in the "Search" window once we open a new "Search" window.

Have you also encountered this problem? Any solution? Many thanks for your help.
Hi, I heard through the grapevine that the APM agent can now connect directly to ThousandEyes. Is this true? And is there any instructional documentation that shows how to configure the agents to achieve it?
Hi All,

We are forwarding Cloudflare firewall events to Splunk and have enabled "payload logging" to see what payload was sent by the user. Unfortunately, the payload data forwarded to Splunk is encrypted, and we are not sure what tool to use to decrypt it. We do hold the private keys, but how to decrypt the payload within a Splunk search is the question. We tried installing the DECRYPT2 app on Splunk, but that was also of no help.

Has anyone come across this type of issue, and how did they fix it? Could someone suggest how to proceed?
Hi, in the AppDynamics documentation there is an option sim.cluster.logs.capture.enabled. The documentation says "This option is disabled by default.", yet the listed default value is "true". This is a little confusing, because logically sim.cluster.logs.capture.enabled = true should mean that log capturing is enabled. So if I want to enable log capture, must I set the value to "false"? https://docs.appdynamics.com/appd/23.x/latest/en/infrastructure-visibility/monitor-kubernetes-with-the-cluster-agent/administer-the-cluster-agent/enable-log-collection-for-failing-pods
I have the following query displaying as a table in a classic dashboard:

| makeresults format=json data="[{\"item\":\"disk1\", \"size\":2147483648, \"size_pretty\":\"2 GB\"}, {\"item\":\"disk2\", \"size\":1099511627776, \"size_pretty\":\"1 TB\"}]"
| table item size size_pretty

When you sort by "size" the table works as expected (2 GB is smaller than 1 TB). When you sort by "size_pretty", though, it of course thinks that "1 TB" comes before "2 GB" (lexicographic sort order).

What I would like (purely about user experience) is to:
1) Hide the "size" column, as it will be pretty horrible to read.
2) When the user clicks the "size_pretty" column to sort the table, have it actually sort by "size" (up or down), even though that column is not visible to the user. The output sorted smallest to largest would look like:

item  size_pretty
disk1  2 GB
disk2  1 TB

Is there any way to achieve this? Note that I am on Splunk Cloud, so I do not have access to the file system. (If it can be done on a dynamic dashboard instead, I'd consider that.)

Bonus points if I can also apply column formatting with a colour scale as you would on a normal table.
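One idea I've been toying with, in case it helps frame the question: keep only the numeric size field and use fieldformat so it merely renders as a human-readable string. As far as I understand, fieldformat affects display only, so the column sort should still use the underlying numeric value. Untested sketch:

| makeresults format=json data="[{\"item\":\"disk1\", \"size\":2147483648}, {\"item\":\"disk2\", \"size\":1099511627776}]"
| fieldformat size = case(size >= pow(1024,4), round(size/pow(1024,4), 0)." TB", size >= pow(1024,3), round(size/pow(1024,3), 0)." GB", true(), size." B")
| table item size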
I am having trouble ingesting my data into Splunk consistently. I have an XML log file that is constantly being written to (about 100 entries per minute); however, when I search for the data in Splunk I only see sporadic results, e.g. results for 10 minutes, then nothing for the next 20, and so on. My inputs and props configs are below.

inputs.conf:
[monitor:///var/log/sample_xml_file.xml]
disabled = false
index = sample_xml_index
sourcetype = sample_xml_st

props.conf:
[ sample_xml_st ]
CHARSET=UTF-8
KV_MODE=xml
LINE_BREAKER=(<log_entry>)
NO_BINARY_CHECK=true
SHOULD_LINEMERGE=FALSE
TIME_FORMAT=%Y%m%d-%H:%M:%S
TIME_PREFIX=<log_time>
TRUNCATE=0
description=describing props config
disabled=false
pulldown_type=1
TZ=-05:00

Sample XML log:
<?xml version="1.0" encoding="utf-8" ?>
<log>
  <log_entry>
    <log_time>20230724-05:42:00</log_time>
    <description>some random data 1</description>
  </log_entry>
  <log_entry>
    <log_time>20230724-05:43:00</log_time>
    <description>some random data 2</description>
  </log_entry>
  <log_entry>
    <log_time>20230724-05:43:20</log_time>
    <description>some random data 3</description>
  </log_entry>
</log>

This XML log file is constantly being written to with new log_entry elements.
Hello Members,

I have seen and used the accum command, but it does not quite give me what I want. I have this search, which gives me a line chart of event count over the time range:

index=main sourcetype=cisco:asa host=* message_id=113004
| eval Date=strftime(_time, "%Y-%m-%d %H:%M:%S")
| timechart count BY message_id

The chart type can be anything. I would like to get an accumulated total for a time period, like 24 hours. It is OK to count every hour, but I want to show the accumulated count at each hour, ending with the total for the time range, i.e. 24 hours.

Thanks for the great source of help here,
eholz1
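Something like this is roughly what I am picturing, in case it helps frame the question (untested sketch based on the search above):

index=main sourcetype=cisco:asa host=* message_id=113004
| timechart span=1h count AS hourly_count
| accum hourly_count AS running_total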
When I run the following query:

index="myindex" sourcetype="hamlet" environment=staging
| top limit=10 client
| eval percent = round(percent)
| rename client AS "Users", count AS "Requests", percent AS "Percentage %"

I get these results:

Users  Requests  Percentage %
joe.smith@alora.com  118  21
martha.taylor@gmail.com  80  14
paul.gatsby@aol.com  68  12

What I want instead are these results:

Users  Requests  Percentage %
joe.smith  118  21
martha.taylor  80  14
paul.gatsby  68  12

I hope this helps. Sorry if my original post was confusing. I appreciate your help. Thank you.
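In other words, I just need to strip the domain from the client field before renaming; something along these lines is what I've been attempting (untested sketch):

index="myindex" sourcetype="hamlet" environment=staging
| top limit=10 client
| eval percent = round(percent)
| eval client = mvindex(split(client, "@"), 0)
| rename client AS "Users", count AS "Requests", percent AS "Percentage %"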
Currently we have Microsoft IIS web servers out in the environment, but the fields they log are spotty. Is there any way to enable logging for all available fields? We have a deployment server; would that be of help in this situation?

For context, I've included a list of some of the specific fields we're looking for: date, time, c-ip, cs-username, s-ip, s-port, cs-method, cs-uri-stem, etc.
Hi guys! I have a static snapshot lookup that stores a lot of information about the vulnerabilities active on my hosts on Mar/01. This SPL shows me the full list of unique identifiers:

| inputlookup gvul:collectMar.csv
| table UniqID

This SPL shows me the list of unique identifiers active today:

earliest=-1d index=myindex sourcetype=mysourcetype
| table UniqID

My team works on fixing these vulnerabilities, so I want a timechart showing the work progress, based on the snapshot lookup. I don't care about new vulnerabilities found since the snapshot. This is the SPL I'm using to do this:

earliest=1677719215 index=myindex sourcetype=mysourcetype
| join type=inner UniqID [ | inputlookup gvul:collectMar.csv | table UniqID]
| timechart span=1d count(UniqID)

So, is there a way to do this without using a join statement?
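One alternative I've been considering (untested sketch): use the lookup as a subsearch filter on the base search instead of joining, so only events whose UniqID appears in the snapshot survive. I switched count to dc since I really only care about how many distinct vulnerabilities remain, but count(UniqID) would mirror my original:

earliest=1677719215 index=myindex sourcetype=mysourcetype
    [ | inputlookup gvul:collectMar.csv | fields UniqID ]
| timechart span=1d dc(UniqID)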
My team needs to create a dashboard that monitors the number of DB connections per DB agent. I'm pretty sure I've found the metric for this in the Metric Browser under the DB tab (number of DB nodes), but I'm unable to find it when adding a widget. Not sure if there is some other way to monitor this or if I'm missing something obvious.
Hi community, I have an issue where I am ingesting some XML data but the data coming in is very sporadic. Any idea what could be causing this?
I'm trying to show which users logged into AWS with an assigned role and what they accessed/changed. Is there a specific AWS audit log I need to ingest? We have people making changes with no documentation of when they made a change or when they logged in.
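Assuming the answer is CloudTrail (which is what I suspect), this is a rough sketch of the search I'd want to end up with once those logs are ingested via the AWS add-on; the index name is a placeholder, and the sourcetype and field names are my assumption of what the add-on produces:

index=aws sourcetype="aws:cloudtrail" (eventName=ConsoleLogin OR eventName=AssumeRole)
| stats earliest(_time) AS first_seen, latest(_time) AS last_seen, count BY userIdentity.arn, eventName
| convert ctime(first_seen) ctime(last_seen)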
Hi, I have two KV store lookups, and they are huge:
* one is more than 250k rows
* the second has 65k rows

The 250k-row lookup contains only an IP, while the second one contains an IP CIDR plus a LIST field. So I run a search like:

| inputlookup list_250k
| rename ip_cidr as ip
| eval convert_ip=tostring(ip)
| lookup list_65k ip_cidr AS convert_ip OUTPUT ip_cidr, list
| where isNotNull(ip_cidr)
| rename ip_cidr as found_in

I am getting results, but I am curious: are there any limits? If, for example, the search is being limited, would I see an error? (There is no progress bar indicating that it is working on something.)
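For reference, if KV store query limits turn out to be the concern, I believe the relevant knobs live in limits.conf under the [kvstore] stanza, something like the below (parameter names and defaults quoted from memory of limits.conf.spec, so please verify before relying on them):

[kvstore]
# Maximum rows a single KV store query can return (default 50000, if I recall correctly)
max_rows_per_query = 50000
# Maximum result size per query, in MB (default 100, if I recall correctly)
max_size_per_result_mb = 100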