All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Hi,
There are multiple ways to send data to Splunk using Azure Blob Storage:
1. Multiple events batched in a single file pushed to the object store
2. A single event per file

Splunk can consume both approaches. Which option is better for Splunk integration with Azure Blob Storage / S3, and what are the technical considerations for each approach?

Hi,
I have installed Splunk Enterprise and would like to use it for data collection with the AWS Kinesis Firehose Add-on. I am able to access Splunk via localhost:8000, but when setting up the Splunk destination in AWS I am not sure what the value should be for the HEC endpoint. Do we need TLS certificates even for localhost? If so, I read in the documentation (setting up the AWS Kinesis Firehose Add-on) that we cannot use self-signed certificates.

I would like to know:
1. What will the HEC endpoint be when using a localhost Splunk instance?
2. How do I get a TLS certificate for localhost, if one is needed?

Thanks,
Rithika
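For reference, a minimal sketch of what the HEC endpoint looks like on a default Splunk Enterprise install (port 8088 is the default HEC port and may differ in your Global Settings; the hostname is a placeholder):

    https://<splunk-host>:8088/services/collector/event

Note that Kinesis Firehose itself cannot deliver to "localhost": it needs an endpoint it can reach over the network, terminated with a certificate signed by a trusted CA, which is why the add-on documentation rules out self-signed certificates.
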
Hello everyone, I'm a newbie, so please be gentle.

We are using Amazon Linux 2. Our configuration has a Universal Forwarder co-hosted with a Jenkins controller node. The UF is monitoring the log directories of the Jenkins host and forwarding certain logs that match a directory traversal pattern. There are thousands of files in the log directories, but the system was working fine until... on or about 19th June 2023, the host executed one of its regular 'yum update' cron jobs, after which point all the log files stopped flowing from the host to our Heavy Forwarder.

We have done a thorough investigation: there are no symptoms in Amazon CloudWatch suggesting that any of the hosts or network links involved are remotely troubled by machine load, and looking directly at netstat output doesn't suggest the network is clogged either.

My question is: has anyone else had their Splunk environment go bad recently due to a recent yum update?

Hope someone can help, Mike

I extract with rex a field that contains numeric values, often with leading zeros. I want to display the values as strings, left-aligned, without the leading zeros getting truncated. Example values: 00123, 22222, 12345_67.

When showing these values in a dashboard table, the string values are interpreted as numbers where possible, and I get:

         123
       22222
    12345_67

Is there a way to get string values correctly displayed as such in a dashboard table? I also tried the dirty way proposed here, prefixing the strings with a space or a non-breaking space, but it does not work (I'm on Splunk 9.0.4). It works when adding a visible character (e.g. "x"), but that makes the table unusable.

Hi, I recently installed Splunk Enterprise on my Kali Linux machine, but it bypassed the credential login option. Now I can't log in to the instance. Please help.

All the configuration is in place in limits.conf, and the MMDB file has been updated in /opt/splunk/share/:

    limits.conf
    [iplocation]
    db_path = /opt/splunk/share/GeoLite2-City-Latest.mmdb

Please help me find a solution for this. Thanks in advance.
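A quick sanity check that the custom MMDB is actually being used (a minimal sketch; the IP below is just a public test address):

    | makeresults
    | eval ip="8.8.8.8"
    | iplocation ip
    | table ip City Region Country lat lon

If City/Country come back empty after a restart, the db_path value or the file permissions on the .mmdb file are the usual suspects.
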
I am attempting to extract attachment fields from our email logs using regex: attachments like .jpg, .png, .pdf, etc. I have gone through the process of using the SPL field-extraction feature, however it usually results in only one attachment type or another being selected; if I try to select other attachment types as well, the extraction fails. Any suggestions would be greatly appreciated. Thank you.
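A minimal sketch of an inline extraction that catches several extensions at once (the field name "attachment", the index placeholder, and the extension list are assumptions; max_match=0 makes rex return every match as a multivalue field):

    index=<your_email_index>
    | rex field=_raw max_match=0 "(?<attachment>\S+\.(?:jpg|jpeg|png|gif|pdf|docx?|xlsx?|zip))"
    | table _time attachment
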
Dear community,
After I forwarded the syslog from a Cisco ASA into Splunk, I noticed that the logs are duplicated, and this is consuming our license. Any help please? Thank you.

I've been trying to solve this every which way, and I always come up just short of the target. When searching the Linux audit log, type=EXECVE has the most detailed information regarding executed commands. However, if you are interested in anything other than the command/binary (a0), you end up with unspecific wildcard field searches. Depending on the command and the number of options there is a dynamic number of "aX" fields, where the highest index equals another field's value (argc) minus 1. For an event, argc=3 means that there are fields a0, a1, and a2 (three "arguments").

What I want is a way to run a base search that returns a large number of events with varying numbers of "a" fields (a0 ... a(argc-1)) and, preferably, place these in a table whose columns dynamically match the maximum number of argument (aX) fields actually observed. In other words, NOT this:

    index="linux" source="/var/log/audit/audit.log" type="EXECVE"
    | table a0 a1 a2 a3 a4 a5 a6 a7 a8 a9 a10 a11 ......

This works, though I really don't like having to hardcode excessive values like this. What I would prefer is a way to generate, based on the fields observed in the base search, a matching (maximum) number of columns (aX values) in a table:

    a0      a1      a2      a3      a4      a5      a6
    value1  value2  value3  value4  value5  value6  value7

Maybe even merging the values of fields a0 to a(argc-1) into a single new field with the entire command:

    Command
    value1 value2 value3 value4 ... value7

I got started like this:

    index="linux" source="/var/log/audit/audit.log" type="EXECVE"
    | eval argc_numeric = tonumber(argc)
    | eval args = mvrange(0, argc_numeric)
    | mvexpand args
    | dedup args
    | eval arg = "a" + tostring(args)

This actually produces a field with the correct number of aX values (field names), though it feels like I'm taking a long way around to produce something that is already present (the field names) in the base search. And I have no idea how to use the "arg" values as input for table (or stats or whatever). I was thinking of something along the lines of

    for i in arg: print(arg)

as input for table, though this may be a horrible way to approach the problem. So, hopefully these chaotic notes are enough to explain what I am trying to achieve; I have no idea at all how to approach it in a good and effective way. All suggestions are welcome.
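A minimal sketch of one possible approach using foreach to stitch the aX fields back into a single command string. Two assumptions to verify on real data: the a* wildcard also matches any other field beginning with "a" (argc is renamed out of the way here), and fields are walked in lexical order, so a10 may sort before a2 on very long command lines:

    index="linux" source="/var/log/audit/audit.log" type="EXECVE"
    | rename argc as num_args
    | eval command=""
    | foreach a* [ eval command=command." ".'<<FIELD>>' ]
    | eval command=ltrim(command)
    | table _time num_args command

For the dynamic-columns version, note that table accepts wildcards, so | table _time a* avoids hardcoding an upper bound (column order is again lexical, though).
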
Hi, I had Splunk 9.0.5 and Splunk Connect for Syslog (SC4S) 1.110 running and working for months. I just realized that there have been no events ingested via HEC for two weeks. Both servers are in the same subnet, with no firewall in between.

- The local firewall of the server has a rule for incoming TCP 8088 traffic. (screenshot attached)
- HEC is enabled. (global settings screenshot attached)
- The HEC token is correct. It is the same in SC4S and Splunk.
- netstat on the Splunk server shows it listening on port 8088. (attached)
- ping from SC4S to Splunk works, and curl to splunk:80 works fine; curl to splunk:8088 throws a timeout. (attached)
- Local firewall on the SC4S host, firewall-cmd --list-all:

    drop (active)
      target: DROP
      icmp-block-inversion: yes
      interfaces: eth0
      sources:
      services: ssh syslog syslog-tls
      ports: 514/tcp 601/tcp
      protocols:
      forward: no
      masquerade: no
      forward-ports:
      source-ports:
      icmp-blocks: echo-reply echo-request port-unreachable time-exceeded
      rich rules:

Any idea what else I could check? Many thanks.

Hi All,
I have a dashboard with more than 100 small panels showing the Red, Amber, and Green status of 100+ services. Each service has different hosts and sources, so each panel has a different query, but the logic is the same. The problem is that if we need to change the logic in the future, we will have to change 100+ panel queries, which is quite hectic. Is there a way or a template-like mechanism we can use to make the change in all the panels in one go? Thank you in advance.

I'm facing a weird issue: I'm not able to calculate a percentage value when I use two variables/fields. I have a lookup file which looks something like this:

    sl,Service,x_value
    1,X,0.211
    2,other,0.190
    3,Y,0
    4,X,0.200
    5,other,0.220

I'm trying to get two columns in my resultant table showing the total by service and the percentage by service, respectively. I've tried this: the percentage needs to be calculated using two fields, whereas perc1 and perc2 are each substituted with one of those two field values. While perc1 and perc2 get processed and displayed, percentage doesn't show up. I'm not sure what I'm doing wrong here. Can somebody please help?
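A minimal sketch of one way to get the total and the percentage by Service from a lookup like the one above (the lookup file name is a placeholder):

    | inputlookup services_lookup.csv
    | stats sum(x_value) as total by Service
    | eventstats sum(total) as grand_total
    | eval percentage=round(100*total/grand_total, 2)
    | table Service total percentage
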
Hi!
I'm trying to make a two-way filter for my table views work. I got the table view's drilldown to set my token, "text_source_user", to the value I click on. I also have a textbox whose default value I set to "*" in the source code, so that all users are shown until I either change the value in the textbox or click a cell in the table view.

My problem is that both methods of setting the token work fine separately, but when combined, I can't seem to set the value inside the textbox to the one I just clicked in the table view cell. Is there any way to do this? I've been messing with the source code for a while now: I tried setting the default value of the textbox to the token itself using $token$, setting the default value for the token by adding "tokens" in the default stanza, etc. AI doesn't seem familiar with anything other than XML dashboards, and I wanted to keep in line with the newer offering. This is my first Splunk dashboard project and I would really appreciate the help!

Thanks!

Hi all. I've got an interesting case:
$Customer is using on-prem, fully S3-compliant storage: Dell ECS. Restarting the Cluster Master (and only the CM, not the indexers) triggers thousands of timeout events from S3; it is as if the S3 cluster were under a denial-of-service attack. This has ONLY been happening since we upgraded from Splunk 8.2.6 to Splunk 9.0.4 on April 29th; no issues before. See the screenshot below.

What exactly are the indexers requesting from S3 when the CM is restarted? How is this process different between Splunk 8.2.6 and Splunk 9.0.4?

Regards, J

Hello,
I have a column named pverProduct. For some reason the header of the CSV file got into the middle of the results. Is there a way to remove the unwanted row (pverProduct)?

    index="" host= sourcetype=csv source=C:\\
    | table pverProduct
    | dedup pverProduct
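A minimal sketch of one way to drop the stray header row, assuming the unwanted row's value is literally the string "pverProduct":

    <your base search>
    | where pverProduct!="pverProduct"
    | dedup pverProduct
    | table pverProduct
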
Hi
I use

    | stats min(_time) as time_min max(_time) as time_max

in my search. The time is displayed in Unix (epoch) format, for example:

    time_min=1688019886.761
    time_max=1690461727.136

I added an eval time=strftime(_time, "%d-%m-%Y %H:%M") before the stats in order to convert the time, but the result is sometimes strange: the max time is older than the min time. How can I convert the time properly, please?
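A minimal sketch of the usual pattern: keep _time numeric through the stats (so min/max compare epoch values) and only format afterwards:

    <your base search>
    | stats min(_time) as time_min max(_time) as time_max
    | eval time_min=strftime(time_min, "%d-%m-%Y %H:%M"), time_max=strftime(time_max, "%d-%m-%Y %H:%M")

If the formatted string is what goes into min/max, the comparison becomes alphabetical rather than chronological, which is why the "max" can look older than the "min" with a %d-%m-%Y format.
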
    2023-07-28 14:54:33,274 level=ERROR pid=80142 tid=MainThread logger=splunk_ta_o365.modinputs.message_trace pos=__init__.py:run:376 | datainput=b'message' start_time=1690530870 | message="An error occurred while collecting data" stack_info=True
    Traceback (most recent call last):
      File "/opt/splunk/etc/apps/splunk_ta_o365/bin/splunk_ta_o365/modinputs/message_trace/__init__.py", line 371, in run
        self._collect_events(app)
      File "/opt/splunk/etc/apps/splunk_ta_o365/bin/splunk_ta_o365/modinputs/message_trace/__init__.py", line 145, in _collect_events
        self._get_events_continuous(app)
      File "/opt/splunk/etc/apps/splunk_ta_o365/bin/splunk_ta_o365/modinputs/message_trace/__init__.py", line 216, in _get_events_continuous
        self._process_messages(start_date, end_date)
      File "/opt/splunk/etc/apps/splunk_ta_o365/bin/splunk_ta_o365/modinputs/message_trace/__init__.py", line 283, in _process_messages
        message_response = self._get_messages(microsoft_trace_url)
      File "/opt/splunk/etc/apps/splunk_ta_o365/bin/splunk_ta_o365/modinputs/message_trace/__init__.py", line 270, in _get_messages
        raise e
      File "/opt/splunk/etc/apps/splunk_ta_o365/bin/splunk_ta_o365/modinputs/message_trace/__init__.py", line 262, in _get_messages
        response.raise_for_status()
      File "/opt/splunk/etc/apps/splunk_ta_o365/lib/requests/models.py", line 1021, in raise_for_status
        raise HTTPError(http_error_msg, response=self)
    requests.exceptions.HTTPError: 403 Client Error: for url: https://reports.office365.com/ecp/reportingwebservice/reporting.svc/MessageTrace?$filter=StartDate%20eq%20datetime'2023-07-23T07:32:50Z'%20and%20EndDate%20eq%20datetime'2023-07-23T08:32:50Z'

Does anyone know how to fix this? I have tried all the solutions in the Splunk community but they are not working. Can anyone help me? Thank you for your help.

Is it feasible to configure Splunk to authenticate with Oracle databases using LDAP accounts?

Greetings,
I have a Heavy Forwarder that constantly sends logs to Splunk Cloud, but I only receive the logs in the cloud at 09, 10, or 11 pm, and then at 1 or 2 am the next day I get logs every minute. The source is a FortiGate. I have 4 nodes; 3 work perfectly and 1 is the one giving me problems. What could be happening?

I have a Splunk query that helps me visualize different APIs vs. time, as below. Using this query I can see a line for each API over the given time range:

    index=sample_index
    | timechart span=1m count by API

My actual requirement is to get the count by two fields (API and Consumer), i.e. I need a time series for each API and Consumer combination: one for API1_Consumer1, one for API1_Consumer2, one for API2_Consumer3, and so on. How can I achieve that?
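A minimal sketch of the usual workaround, since timechart splits by a single field only: combine the two fields into one before the timechart (the separator is arbitrary):

    index=sample_index
    | eval api_consumer=API."_".Consumer
    | timechart span=1m count by api_consumer

If there are many API/Consumer combinations, add limit=0 (or a higher limit) to timechart, since it groups everything beyond the default of 10 series into OTHER.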