All Posts

As I said earlier, you can use CSS - follow the example in this reply Re: How to color the columns based on previous co... - Splunk Community
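For reference, the general shape of that CSS approach is sketched below. The panel id (my_table), the column number, and the colour are placeholders, and the exact selectors can vary between Splunk versions, so follow the linked reply for the working details.

<row>
  <panel depends="$alwaysHideCSS$">
    <html>
      <style>
        /* placeholder: colour the 3rd column of the table panel with id="my_table" */
        #my_table table thead th:nth-child(3),
        #my_table table tbody td:nth-child(3) {
          background-color: #53a051 !important;
        }
      </style>
    </html>
  </panel>
  <panel>
    <table id="my_table">
      <search>
        <query>index=_internal | stats count by sourcetype, log_level</query>
        <earliest>-15m</earliest>
        <latest>now</latest>
      </search>
    </table>
  </panel>
</row>

The hidden HTML panel exists only to inject the style block; $alwaysHideCSS$ is a token that is never set, which keeps that panel from rendering.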
Updated the Splunk Palo Alto app on a search head and I'm getting these error messages in the _internal index. Any clues? Splunk_TA_paloalto 8.1.1, Splunk core 9.0.3.
04-08-2024 12:49:40.061 +0000 ERROR ModularInputs [2488837 MainThread] - <stderr> Introspecting scheme=aperture: /opt/splunk/etc/apps/Splunk_TA_paloalto/bin/splunk_ta_paloalto/aob_py3/solnlib/packages/requests/__init__.py:91: RequestsDependencyWarning: urllib3 (1.26.13) or chardet (3.0.4) doesn't match a supported version!
04-08-2024 12:49:40.061 +0000 ERROR ModularInputs [2488837 MainThread] - <stderr> Introspecting scheme=aperture: RequestsDependencyWarning)
04-08-2024 12:49:40.969 +0000 ERROR ModularInputs [2488837 MainThread] - <stderr> Introspecting scheme=autofocus_export: /opt/splunk/etc/apps/Splunk_TA_paloalto/bin/splunk_ta_paloalto/aob_py3/solnlib/packages/requests/__init__.py:91: RequestsDependencyWarning: urllib3 (1.26.13) or chardet (3.0.4) doesn't match a supported version!
04-08-2024 12:49:40.969 +0000 ERROR ModularInputs [2488837 MainThread] - <stderr> Introspecting scheme=autofocus_export: RequestsDependencyWarning)
04-08-2024 12:49:59.031 +0000 ERROR ModularInputs [2488837 MainThread] - <stderr> Introspecting scheme=cortex_xdr: /opt/splunk/etc/apps/Splunk_TA_paloalto/bin/splunk_ta_paloalto/aob_py3/solnlib/packages/requests/__init__.py:91: RequestsDependencyWarning: urllib3 (1.26.13) or chardet (3.0.4) doesn't match a supported version!
04-08-2024 12:49:59.031 +0000 ERROR ModularInputs [2488837 MainThread] - <stderr> Introspecting scheme=cortex_xdr: RequestsDependencyWarning)
04-08-2024 12:50:00.762 +0000 ERROR ModularInputs [2488837 MainThread] - <stderr> Introspecting scheme=iot_security: /opt/splunk/etc/apps/Splunk_TA_paloalto/bin/splunk_ta_paloalto/aob_py3/solnlib/packages/requests/__init__.py:91: RequestsDependencyWarning: urllib3 (1.26.13) or chardet (3.0.4) doesn't match a supported version!
04-08-2024 12:50:00.762 +0000 ERROR ModularInputs [2488837 MainThread] - <stderr> Introspecting scheme=iot_security: RequestsDependencyWarning)
The OS is Ubuntu, fully patched.
Yep, that's the default self-signed cert that comes with Splunk, as I suspected. There's likely no way to fix that on a Cloud trial (and you'll have to disable SSL validation for testing), but you won't have to do that on a production Splunk Cloud stack.
You're not showing us the events. You're showing bits and pieces from separate events.
Hi @ITWhisperer, I used this stanza to check whether the values match. If I append with mvappend, it shows both values. How do I set the rules in the dashboard? Could you please help with it?
| eval match=if(SourceFileDTLCount=TotalAPGLRecordsCountStaged,"Match","Not Match")
| eval SourceFileDTLCount=mvappend(SourceFileDTLCount,match)
Hi @richgalloway thank you for the input. Do you have documentation references that you can point to?
Hello everyone! I need some help creating a multivalue field. Events can contain one or more fields of the following form; let me explain with an example.

Event1:
FICHERO_LOG1 = /any/log1/id/idca-admin/idca-admin.log
FICHERO_LOG2 = /any/log1/id/log1/any1.log
FICHERO_LOG3 = /any/log1/httpd/*

Event2:
FICHERO_LOG1 = /any/log2/id/id.log
FICHERO_LOG2 = /any/log2/logging.log
FICHERO_LOG3 = /any/log2/tree/httpd/ds/log2/*
FICHERO_LOG4 = /any/log2/id/id-batch/id-batch2.log

EventN:
FICHERO_LOG1 = /any/logN/data1/activemq.log
FICHERO_LOG2 = /any/logN/id/hss2/*.system.log
………
FICHERO_LOGN = /any/path1/id/…./*…..log

The result I expect is a single multivalue key, LOG. For Event1:
LOG = /any/log1/id/idca-admin/idca-admin.log
      /any/log1/id/log1/any1.log
      /any/log1/httpd/*

For Event2:
LOG = /any/log2/id/id.log
      /any/log2/logging.log
      /any/log2/tree/httpd/ds/log2/*
      /any/log2/id/idca-batch/idca-batch2.log

For EventN:
LOG = /any/logN/data1/activemq.log
      /any/logN/id/hss2/*.system.log
      …….
      /any/path1/id/…./*…..log

I have tried with

transform.conf:
[my-log]
REGEX=^.*FICHERO_LOG.*\=\s*( ?<log>.*?)\s*\n
MV-AD=true

props.conf:
[extractingFields]
TRANSFORM = other_transforms_stanza, my-log

But it's not working. Any ideas or help? What steps should I follow?

Thanks
JAR
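Not from the thread, just a hedged sketch: if a search-time extraction inside the search itself is acceptable, one way to collect every FICHERO_LOGn value into a single multivalue field is rex with max_match=0 (the target field name LOG is an assumption):

| rex field=_raw max_match=0 "FICHERO_LOG\d+\s*=\s*(?<LOG>\S+)"
| table LOG

If the props/transforms route is preferred, multivalue search-time extractions are normally wired up with a REPORT- stanza in props.conf pointing at a transforms.conf stanza that sets MV_ADD = true; the stanza names in the post above are left as written, so treat this only as a direction to check, not a confirmed fix.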
It's showing in the events.
Here is the response:
CONNECTED(00000005)
depth=1 C = US, ST = CA, L = San Francisco, O = Splunk, CN = SplunkCommonCA, emailAddress = support@splunk.com
verify error:num=19:self signed certificate in certificate chain
verify return:0
write W BLOCK
Certificate chain
 0 s:/CN=SplunkServerDefaultCert/O=SplunkUser
   i:/C=US/ST=CA/L=San Francisco/O=Splunk/CN=SplunkCommonCA/emailAddress=support@splunk.com
 1 s:/C=US/ST=CA/L=San Francisco/O=Splunk/CN=SplunkCommonCA/emailAddress=support@splunk.com
   i:/C=US/ST=CA/L=San Francisco/O=Splunk/CN=SplunkCommonCA/emailAddress=support@splunk.com
Yes, the certs are from Splunk. Thank you
Hello all, we plan to use the Splunk OVA for VMware Metrics (5096) in combination with Splunk on Windows. I can't find any information on how this OVA will be supported, e.g. operating system and Splunk updates. Does anyone know? Regards, Bernhard
Try it without the search:
<input type="dropdown" token="color" depends="$color_dropdown_token$" searchWhenChanged="false">
  <label>Color</label>
  <choice value="*">All</choice>
  <choice value="Green">Green</choice>
  <choice value="Orange">Orange</choice>
  <choice value="Red">Red</choice>
  <initialValue>*</initialValue>
</input>
Hi, thanks for the reply. Meanwhile I found another solution; I will try this one next to see if it works.
I am using the below to load colors into a dropdown list. The data loads properly, but it always shows "Could not create search - No Search query provided".
<input type="dropdown" token="color" depends="$color_dropdown_token$" searchWhenChanged="false">
  <label>Color</label>
  <choice value="*">All</choice>
  <choice value="Green">Green</choice>
  <choice value="Orange">Orange</choice>
  <choice value="Red">Red</choice>
  <initialValue>*</initialValue>
  <search>
    <query/>
    <earliest>$Time.earliest$</earliest>
    <latest>$Time.latest$</latest>
  </search>
</input>
As I explained earlier, you don't need to just look back further and further. The "issue" is to do with indexing lag. Whenever that lag spans a report time period boundary, you have the potential for missed events. To mitigate this, you could use overlapping time periods, and use some sort of deduplication scheme, such as a summary index, if you want to avoid multiple alerts for the same event.
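A minimal sketch of that idea, assuming a 15-minute schedule, a hypothetical incidentId field, and a summary index named alert_summary: each run searches a 30-minute window (so it overlaps the previous run), drops anything already recorded in the summary index, and writes the remainder back to it.

index=my_index sourcetype=my_sourcetype earliest=-30m@m latest=@m
| dedup incidentId
| search NOT [ search index=alert_summary earliest=-24h@h | fields incidentId ]
| collect index=alert_summary

All of the index, sourcetype and field names here are placeholders; the overlapping-window-plus-dedup shape is the point.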
Thanks for your answer, KothariSurbhi. After some debugging I've discovered that Splunk pulled logs again from many buckets, from all kinds of different dates, on February 23rd. It seems that logs that had already entered Splunk in 2023 entered again on February 23, 2024, for a reason that is still unclear. Nothing happened on the AWS side and the S3 buckets look perfectly fine.
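For anyone chasing something similar, one hedged way to spot re-ingested events is to compare event time with index time; the index name and dates below are placeholders matching this thread's scenario.

index=my_aws_index earliest=-18mon@mon
| eval indexed_day=strftime(_indextime, "%Y-%m-%d")
| where indexed_day="2024-02-23" AND _time < relative_time(now(), "-60d")
| stats count by source, indexed_day

Events whose _time is months older than their _indextime are the ones that were pulled in again.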
I will try searching over the last 60 minutes and throttling on the incidentId.
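For reference, field-based throttling on a saved alert can be expressed in savedsearches.conf roughly like this; the stanza name is a placeholder and this is a sketch of the throttle idea, not a verified fix for this alert.

[my_incident_alert]
alert.suppress = 1
alert.suppress.fields = incidentId
alert.suppress.period = 60m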
Hello @matoulas, can you please elaborate on the question? A one-liner doesn't seem to define the actual problem.
Hello @alexspunkshell, the search below should give you the list of all CIM indexes macro definitions:
| rest /servicesNS/-/-/admin/macros count=0 splunk_server=local
| search title=cim*indexes
| table title definition
Please accept the solution and hit Karma if this helps!
If your report runs every 15 minutes looking back 15 minutes, there will be boundary conditions where an event has a timestamp in the 15 minutes prior to the reported period but didn't get indexed until the current period, and is therefore missed.
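One hedged way to make the schedule tolerant of that lag is to select events by index time rather than event time, for example (index and sourcetype are placeholders):

index=my_index sourcetype=my_sourcetype _index_earliest=-15m@m _index_latest=@m

Each run then picks up whatever arrived in the last 15 minutes, regardless of the event timestamp.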
Timechart will be filling in the empty time slots with zeroes. Given that you have an error, I suspect that this part of the process hasn't been reached before the error, which is why these are missing from your final result.
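As a small illustration of that zero-filling (any index works; this one just uses internal logs):

index=_internal sourcetype=splunkd log_level=ERROR earliest=-4h@h
| timechart span=15m count

Buckets with no matching events come back with count=0 rather than being dropped, so zeroes on their own don't tell you whether that step of the process was ever reached.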