All Posts

Hi all fellow Splunkthiasts, I need your help in understanding what could possibly go wrong in an HFW (heavy forwarder) upgrade. We are collecting some logs through the HFW and rely on search-time extractions for this sourcetype. After upgrading the HFW, the extractions stopped working. The data format is basically several Key=Value pairs separated by newlines. Investigating this, I found out:

- There is no props.conf configuration for this sourcetype on the upgraded HFW (and it wasn't there even before the upgrade). All relevant configuration is on another instance serving as both indexer and search head.
- props.conf for the relevant sourcetype on the search head has AUTO_KV_JSON=true. I don't see KV_MODE in splunk show config props / splunk btool props list (I suppose it takes the default value "auto").
- I have already realized that the upgrade didn't take the right path (it was 7.2.6 upgraded directly to 9.2.1, without stepping through 8.2).

Except for search-time extractions, everything seems to work as expected (data is flowing in, and event breaking and timestamp extraction seem to be correct). What I don't understand is how an HFW upgrade can even affect search-time extractions on another instance. From there I am a bit clueless about what to focus on to fix this.
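If it helps, a minimal sketch of pinning the search-time setting explicitly on the search head, assuming a hypothetical sourcetype name my_kv_sourcetype and that the data really is newline-separated Key=Value pairs:

# props.conf on the search head (my_kv_sourcetype is a placeholder)
[my_kv_sourcetype]
# Explicitly request automatic key=value extraction at search time.
# When KV_MODE is absent it normally defaults to auto, but setting it
# rules out a changed default or an overriding app after the upgrade.
KV_MODE = auto

Afterwards, splunk btool props list my_kv_sourcetype --debug shows which app each setting is actually coming from, which is a good way to spot a precedence change.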
Hi Team, there is a requirement to get the license usage, split in GB on a daily basis, for the top 20 log sources, along with the host, index and sourcetype details. Kindly help with the query.
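A minimal sketch of such a query, run on the license manager against license_usage.log, assuming its standard fields (b = bytes, h = host, s = source, st = sourcetype, idx = index). Note that Splunk squashes h and s when per-pool cardinality is high, so some rows may show squashed values, and "top 20" here is across the whole time range rather than per day:

index=_internal source=*license_usage.log type=Usage
| bin _time span=1d
| stats sum(b) as bytes by _time s h idx st
| eval GB=round(bytes/1024/1024/1024, 3)
| sort - GB
| head 20
| rename s as source, h as host, idx as index, st as sourcetype
| fields _time source host index sourcetype GB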
Hello, we have some AWS accounts that use Firehose to forward logs from AWS to Splunk. A few days ago I received a notification that the number of acquired channels started rising rapidly and hit the limit; since then we are no longer able to send logs to Splunk. Splunk Support helped us to use a new Firehose endpoint, but we still see ServerBusy because of the limited channels. Is there any option to monitor how our streams are consuming the channels, and any advice you might have to improve this behaviour?
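Not a full answer, but one knob worth knowing about: the ceiling on HEC acknowledgment channels lives in limits.conf on the receiving side. A minimal sketch, where the value shown is purely illustrative and raising it treats the symptom rather than the root cause of channel churn:

# limits.conf on the instance receiving HEC traffic
[http_input]
# Upper bound on concurrently tracked ack channels (illustrative value)
max_number_of_ack_channel = 2000000

The usual root cause is senders minting a new X-Splunk-Request-Channel GUID per request instead of reusing one per sender, so it is worth investigating where the channel GUIDs in your pipeline come from before raising the limit.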
Hi @gcusello, I think I found my problem. I guess I don't have port 9997 open on my forwarder server. This is the screenshot. How can I open port 9997?
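A minimal sketch for opening the port on a Linux host, assuming either ufw or firewalld is the active firewall (use whichever applies to your distribution):

# Ubuntu/Debian with ufw
sudo ufw allow 9997/tcp

# RHEL/CentOS with firewalld
sudo firewall-cmd --permanent --add-port=9997/tcp
sudo firewall-cmd --reload

# On the receiving Splunk Enterprise instance, enable listening on 9997
$SPLUNK_HOME/bin/splunk enable listen 9997

Then re-test with telnet <ip> 9997 from the sending machine.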
Hi @madhav_dholakia, I'm not sure; please remove the last line of my query, I mean this one: subject = $email_subject // Use the dynamically generated subject. Then, in the Subject box in "Edit Alert", put this: Alert: $email_subject$ OR $email_subject$. If it doesn't work, post an image of the "Edit Alert" section of your alert here.
Ok. Now I ran the telnet <my-forwarders-ip> 9997 command from my Windows PC. The result is "Could not open connection to the host. Port 9997. Connect failed". I ran it for both the private IP and the public IP. My Windows firewall is disabled, and my forwarder server doesn't even have a firewall installed.
Hi all, I am trying to integrate MS SQL audit log data with a UF instead of DB Connect. What is the best and recommended way to do it that maps all fields? At the moment it is integrated with the UF using the "Splunk Add-on for Microsoft SQL Server". I am also seeing one additional dummy event (blank, with no values) alongside every event that comes in. My inputs.conf:

[WinEventLog://Security]
start_from = oldest
current_only = 0
checkpointInterval = 5
whitelist1 = 33205
index = test_mssql
renderXml = false
sourcetype = mssql:aud
disabled = 0
Hi @Cyner__, you should run the telnet from the client, not from the server: telnet my-private-ip 9997. If it doesn't answer, there's something in the middle (e.g. personal firewalls) that blocks the connection. Ciao. Giuseppe
When you complete the setup, it will create a local/passwords.conf. You're saying you don't have a passwords.conf, so did you complete the setup? If so, did you set the default org/key, or did you name it something other than "default"? When you have "default" set, you don't have to specify the org or key with the openai/ChatGPT command.
Ah, also, when I clicked the "Data Summary" button in Splunk Enterprise web, I only see "waiting for results".
The following two errors repeat every minute in splunkd.log on Splunk Enterprise. What is causing this?

06-07-2024 10:45:00.314 +0200 ERROR ExecProcessor [2519201 ExecProcessorSchedulerThread] - message from "/data/splunk/bin/python3.7 /data/splunk/etc/apps/search/bin/quarantine_files.py" Quarantine files framework - Unexpected error during execution: Expecting value: line 1 column 1 (char 0)
06-07-2024 10:45:00.314 +0200 ERROR ExecProcessor [2519201 ExecProcessorSchedulerThread] - message from "/data/splunk/bin/python3.7 /data/splunk/etc/apps/search/bin/quarantine_files.py" Quarantine files framework - Setting enable_jQuery2 - Unexpected error during execution: Expecting value: line 1 column 1 (char 0)
Thanks for the help @gcusello, but my problem still occurs. When I telnet to port 9997 on my computer (tried both the private and the public IP), telnet returns a "connection timed out" error. I already enabled receiving. I don't know whether I enabled forwarding or not, but I started it with a command and configured the output and input files.

This is inputs.conf:

[monitor:///home/cowrie/cowrie/var/log/cowrie/cowrie.json]
index = cowrie
sourcetype = json
disabled = false

This is outputs.conf:

[tcpout]
defaultGroup = default-autolb-group

[tcpout:default-autolb-group]
disabled = false
server = my-private-ip:9997

Sorry if I missed something; as I said, I'm new to both Linux and Splunk.
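A couple of quick checks, sketched under the assumption of default install paths (/opt/splunkforwarder for the UF, Splunk Enterprise on the receiving host):

# On the Universal Forwarder: is forwarding configured and active?
/opt/splunkforwarder/bin/splunk list forward-server

# On the Splunk Enterprise host: is anything actually listening on 9997?
netstat -tlnp | grep 9997

If "list forward-server" shows your server under "Configured but inactive forwards", the UF configuration is fine and the problem is the network path or the receiver.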
Version 3.31
Hi @shimada-k, yes, correct. You don't have the interface field in all the events, so you cannot display it in all rows. Ciao. Giuseppe
Splunk-to-Slack report integration is not displaying all events from the results. We have a report running that produces the records below in its output, but the Splunk report triggered to Slack displays only the first record in the alert description/summary. How do I get the entire output into the alert summary/description?

UnmappedActions
test, some value
test, some value
test, some value

base search | stats values(unmapped_actions) as UnmappedActions
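One thing that may help: stats values() produces a multivalue field, and some alert-action payloads render only its first value. A minimal sketch that flattens it into a single delimited string before the alert fires (the delimiter is arbitrary):

base search
| stats values(unmapped_actions) as UnmappedActions
| eval UnmappedActions=mvjoin(UnmappedActions, "; ")

Alternatively, appending | nomv UnmappedActions converts the multivalue field into a single newline-separated value.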
Hello @marysan - thanks for this. I have created this email_subject field, and when used within the email body as $email_subject$ it worked fine, but not when used in the email subject. Can you please suggest if I am missing something?

| eval email_subject=MonthYear." - ".Customer." - ".CheckName." - ".Device

Thank you.
Thanks again, gcusello. Much appreciated. I need to add <"values.interface" AS interface> in the rename, correct? I executed the following query:

index=gnmi ("tags.next-hop-group"=* OR "tags.index"=*) earliest="06/07/2024:08:28:14"
| rename "tags.next-hop-group" AS tags_next_hop_group "tags.index" AS tags_index "tags.ipv4-entry_prefix" AS ipv4_entry_prefix "tags.network-instance_name" AS network_instance_name "values.interface" AS interface
| eval tags_index=coalesce(tags_index, tags_next_hop_group)
| stats values(ipv4_entry_prefix) AS ipv4_entry_prefix values(network_instance_name) AS network_instance_name values(interface) AS interface BY tags_index
| sort ipv4_entry_prefix network_instance_name

Then I received the result in the attached screenshot. My expectation is that "Ethernet48" appears in the 1st and 2nd lines. The data is as shown in the second screenshot. Many thanks, Kenji
What version of the app do you have?  
Hi @Cyner__, first of all, did you follow the instructions at https://docs.splunk.com/Documentation/Splunk/9.2.1/Data/Usingforwardingagents ? In other words:

- Did you check that the route between the UF and Splunk is open on port 9997 (the default)? You can do this using telnet.
- Did you enable receiving in Splunk Enterprise? [Settings > Forwarding and Receiving > Receiving]
- Did you enable forwarding in the Universal Forwarder?

When you have done the above steps, you can check the connection using the following search:

index=_internal host=your_client_host

Ciao. Giuseppe
You could try something like this:

index=foo message="magic string" duration > [search index=foo message="magic string" | stats p99(duration) as search]
| stats count as "# of Events with Duration > p99"

(Renaming the subsearch field to "search" makes the subsearch return just the bare value, so the outer search effectively becomes duration > <p99 value>.)
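An equivalent single-pass sketch without the subsearch, in case the subsearch form runs into subsearch result limits on large data sets:

index=foo message="magic string"
| eventstats p99(duration) as p99
| where duration > p99
| stats count as "# of Events with Duration > p99"

Here eventstats computes the 99th percentile across the full result set and attaches it to every event, so the where clause can filter inline.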