Here are the essential parts you should consider using:

| eval match=if(SourceFileDTLCount!=TotalAPGLRecordsCountStaged, "RED", null())
| eval SourceFileDTLCount=mvappend(SourceFileDTLCount,match)

Obviously, change tableCellColourWithoutJS to be the id of your panel:

<panel depends="$stayhidden$">
  <html>
    <style>
      #tableCellColourWithoutJS table tbody td div.multivalue-subcell[data-mv-index="1"] {
        display: none;
      }
    </style>
  </html>
</panel>

<format type="color">
  <colorPalette type="expression">case(match(value,"RED"), "#ff0000")</colorPalette>
</format>
The timeframe you showed (which I removed) applies to the search to dynamically populate the dropdown. Since you don't have a search to populate the dropdown (as shown by your error message), you don't need the timeframe here. You can still use the time in other parts of your dashboard.
As you mentioned, I tried mvappend on the fields, and it shows both values in the table. The thing is, I need to show the colours only when the values do not match.

| eval match=if(SourceFileDTLCount=TotalAPGLRecordsCountStaged, " ", if(SourceFileDTLCount!=TotalAPGLRecordsCountStaged, "Not Match","RED"))
| eval SourceFileDTLCount=mvappend(SourceFileDTLCount,match)
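A minimal sketch of appending the flag only when the counts differ, assuming the same field names as in the search above (the isnotnull() guard keeps mvappend from adding a second value when the counts match):

```spl
| eval match=if(SourceFileDTLCount!=TotalAPGLRecordsCountStaged, "Not Match", null())
| eval SourceFileDTLCount=if(isnotnull(match), mvappend(SourceFileDTLCount, match), SourceFileDTLCount)
```

A colour rule in the dashboard's <format> element can then target the "Not Match" value only, so matched rows keep the default colour.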
I want to filter based on the timeframe, so I can't remove that.
This may be a very simple question, but I'm having trouble finding the answer. I've been trying to use RUM data to identify and list the slowest pages on a website using the Observability dashboard; unfortunately, I don't seem to be able to drill down to any specific page from the dashboard. From the research I've done, it seems I may have to manually add thousands of RUM URL groupings to drill down further, but I have a feeling that shouldn't be correct?
@gcusello Yeah, understood and did the same, thank you. @ITWhisperer any idea? I need help here.

So now I have ingested the CSV file, and from it I am getting:

index=foo host=nx7503 source=C:/*/mkd.csv

Fields: Subscription, Resource, Key Vault, Secret, Expiration Date, Months

CSV file:

Subscription | Resource | Key Vault | Secret | Expiration Date | Months
BoB-foo | Dicore-automat | Dicore-automat-keycore | Di core-tuubsp1sct | 2022-07-28 | -21
BoB-foo | Dicore-automat | Dicore-automat-keycore | Dicore-stor1scrt | 2022-07-28 | -21
BoB-foo | G01462-mgmt-foo | G86413-vaultcore | G86413-secret-foo | 2022-09-01 | -20

And from the lookup (foo.csv):

Application | environment | appOwner
Caliber | Dicore - TCG | foo@gmail.com
Keygroup | G01462 - QA | goo@gmail.com
Keygroup | G01462 - SIT | boo@gmail.com

When the "Expiration Date" is reached and the "Resource" matches the "environment", I want to trigger the alert and send mail to the respective email address (appOwner). How do I get this?
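One possible shape for the alert search, as a sketch only: it assumes a lookup definition named foo over foo.csv, and that the environment values can be matched against Resource (the exact mapping between values such as "Dicore - TCG" and "Dicore-automat" is an assumption and would likely need a normalization step first):

```spl
index=foo host=nx7503 source="C:/*/mkd.csv"
| lookup foo environment AS Resource OUTPUT appOwner
| where strptime('Expiration Date', "%Y-%m-%d") <= now()
| table Subscription Resource "Key Vault" Secret "Expiration Date" appOwner
```

Saved as an alert, the email action can then be pointed at the owner by putting a result token such as $result.appOwner$ in the To field.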
Hi, I want to ingest the backup logs which are in CloudWatch into Splunk using the AWS add-on, but I do not see any metric in the add-on to fetch these details. Under which metric will these backup logs be present? How can I get these logs into Splunk using the add-on? Thank you!
Hi Paul, thank you for your response; I have checked the link that you've given and tried it, but it is not working for me. For example, in the events below I want to onboard only the events that contain "some message" and discard the rest. Could you please suggest a solution?

2023-01-31 10:39:58 message1
2023-01-31 10:40:01 message2
2023-01-31 10:40:08 message3
2023-01-31 10:40:08 message4
2023-01-31 10:40:00 some message
2023-01-31 10:40:01 some message in between
2023-01-31 10:40:01 some message in between
2023-01-31 10:40:01 some message in between
2023-01-31 10:40:01 message5
2023-01-31 10:40:01 message5
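For "keep only events matching a pattern", the usual approach is to route everything to the nullQueue and then re-route the matches back to the indexQueue. A sketch, assuming the config lives on the indexer or heavy forwarder and your_sourcetype is a placeholder for the actual sourcetype:

```ini
# props.conf -- transforms are applied in order; the later match wins
[your_sourcetype]
TRANSFORMS-filter = setnull, setparse

# transforms.conf
[setnull]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

[setparse]
REGEX = some message
DEST_KEY = queue
FORMAT = indexQueue
```

Because setparse runs after setnull, events containing "some message" are put back on the indexQueue and everything else is discarded.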
Please share the source code of your dashboard
The docs at https://docs.splunk.com/Documentation/Splunk/latest/Data/Applytimezoneoffsetstotimestamps#How_Splunk_software_determines_time_zones specify how Splunk determines the time zone for an event.  Note that the UI setting is not included.
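As a sketch of one of the mechanisms those docs describe: if events from a given host arrive without a recognizable offset, the time zone can be forced at parse time in props.conf (the host name and zone here are placeholders):

```ini
# props.conf on the indexer or heavy forwarder
[host::myhost]
TZ = America/New_York
```

TZ can be set per host, source, or sourcetype; it only takes effect when the raw timestamp itself carries no time zone information.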
@ITWhisperer, here is the data source for the source dashboard:

index="xxx" appID="xxx" environment=xxx tags="*Parm*" OR "*Batch*" stepName="*" status=FAILED
| rex field=stepName "^(?<Page>[^\:]+)"
| rex field=stepName "^\'(?<Page>[^\'\:]+)"
| eval Page=upper(Page)
| stats count(scenario) as "Number of Scenarios" by Page
| sort - "Number of Scenarios"

Here is the data source for the destination dashboard:

index="xxx" appID="xxx" environment=xxx tags="*Parm*" OR "*Batch*" stepName="*" scenario="$scenariosTok$" status=FAILED
| rex field=stepName "^(?<Page>[^\:]+)"
| rex field=stepName "^\'(?<Page>[^\'\:]+)"
| rex field=stepName "\:(?P<action>.*)"
| search Page="$stepTok$"
| eval Page=upper(Page)
| stats list(action) as Actions by Page,scenario,error_log
| rename Page as "Page(Step)",scenario as Scenarios,error_log as "Exceptions"
| table Page(Step),Scenarios,Actions,Exceptions

Finally, here are the data sources for the two dropdowns added to the destination dashboard.

Scenarios dropdown:

index="xxx" appID="xxx" environment=xxx tags="*Parm*" OR "*Batch*" stepName="*" scenario="*" status=FAILED
| rex field=stepName "^(?<Page>[^\:]+)"
| rex field=stepName "^\'(?<Page>[^\'\:]+)"
| search Page="$stepTok$"
| stats count by scenario

Page dropdown:

index="xxx" appID="xxx" environment=xxx tags="*Parm*" OR "*Batch*" stepName="*" status=FAILED
| rex field=stepName "^(?<Page>[^\:]+)"
| rex field=stepName "^\'(?<Page>[^\'\:]+)"
| search scenario="$scenariosTok$"
| stats count by Page
Please share the source of your dashboard in a code block so we can see what you have attempted.
Hi @ITWhisperer, I tried the token, but it is not working as expected. Could you please assist with this? Here is the redirection destination dashboard below.
As I said earlier, you can use CSS - follow the example in this reply Re: How to color the columns based on previous co... - Splunk Community
Updated the Splunk Palo Alto app on a search head and I'm getting these error messages in the _internal index. Any clues?

Splunk_TA_paloalto 8.1.1
Splunk core 9.0.3
OS is Ubuntu, fully patched.

04-08-2024 12:49:40.061 +0000 ERROR ModularInputs [2488837 MainThread] - <stderr> Introspecting scheme=aperture: /opt/splunk/etc/apps/Splunk_TA_paloalto/bin/splunk_ta_paloalto/aob_py3/solnlib/packages/requests/__init__.py:91: RequestsDependencyWarning: urllib3 (1.26.13) or chardet (3.0.4) doesn't match a supported version!
04-08-2024 12:49:40.061 +0000 ERROR ModularInputs [2488837 MainThread] - <stderr> Introspecting scheme=aperture: RequestsDependencyWarning)
04-08-2024 12:49:40.969 +0000 ERROR ModularInputs [2488837 MainThread] - <stderr> Introspecting scheme=autofocus_export: /opt/splunk/etc/apps/Splunk_TA_paloalto/bin/splunk_ta_paloalto/aob_py3/solnlib/packages/requests/__init__.py:91: RequestsDependencyWarning: urllib3 (1.26.13) or chardet (3.0.4) doesn't match a supported version!
04-08-2024 12:49:40.969 +0000 ERROR ModularInputs [2488837 MainThread] - <stderr> Introspecting scheme=autofocus_export: RequestsDependencyWarning)
04-08-2024 12:49:59.031 +0000 ERROR ModularInputs [2488837 MainThread] - <stderr> Introspecting scheme=cortex_xdr: /opt/splunk/etc/apps/Splunk_TA_paloalto/bin/splunk_ta_paloalto/aob_py3/solnlib/packages/requests/__init__.py:91: RequestsDependencyWarning: urllib3 (1.26.13) or chardet (3.0.4) doesn't match a supported version!
04-08-2024 12:49:59.031 +0000 ERROR ModularInputs [2488837 MainThread] - <stderr> Introspecting scheme=cortex_xdr: RequestsDependencyWarning)
04-08-2024 12:50:00.762 +0000 ERROR ModularInputs [2488837 MainThread] - <stderr> Introspecting scheme=iot_security: /opt/splunk/etc/apps/Splunk_TA_paloalto/bin/splunk_ta_paloalto/aob_py3/solnlib/packages/requests/__init__.py:91: RequestsDependencyWarning: urllib3 (1.26.13) or chardet (3.0.4) doesn't match a supported version!
04-08-2024 12:50:00.762 +0000 ERROR ModularInputs [2488837 MainThread] - <stderr> Introspecting scheme=iot_security: RequestsDependencyWarning)
Yep, that's the default self-signed cert that comes with Splunk like I suspected.  There's likely no way to fix that on a Cloud trial (and you'll have to disable SSL validation for testing) but you won't have to do that on a production Splunk Cloud stack. 
You're not showing us the events. You're showing bits and pieces from separate events.
Hi @ITWhisperer, I used this stanza to check whether the values match. If I append with mvappend, it shows both values. How do I set the rules in the dashboard? Could you please help with it?

| eval match=if(SourceFileDTLCount=TotalAPGLRecordsCountStaged,"Match","Not Match")
| eval SourceFileDTLCount=mvappend(SourceFileDTLCount,match)
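A sketch of a corresponding colour rule in Simple XML, assuming the table column is SourceFileDTLCount and the appended flag values are "Match"/"Not Match" as above (the hex colours are arbitrary placeholders):

```xml
<format type="color" field="SourceFileDTLCount">
  <colorPalette type="expression">case(match(value,"Not Match"), "#FF0000", match(value,"Match"), "#65A637")</colorPalette>
</format>
```

This goes inside the <table> element of the panel; cells whose value matches the expression are coloured accordingly.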
Hi @richgalloway thank you for the input. Do you have documentation references that you can point to?
Hello everyone! I need some help creating a multivalue field. Events can contain one or more fields of the following form. I'll try to explain with an example.

Event1:
FICHERO_LOG1 = /any/log1/id/idca-admin/idca-admin.log
FICHERO_LOG2 = /any/log1/id/log1/any1.log
FICHERO_LOG3 = /any/log1/httpd/*

Event2:
FICHERO_LOG1 = /any/log2/id/id.log
FICHERO_LOG2 = /any/log2/logging.log
FICHERO_LOG3 = /any/log2/tree/httpd/ds/log2/*
FICHERO_LOG4 = /any/log2/id/id-batch/id-batch2.log

EventN:
FICHERO_LOG1 = /any/logN/data1/activemq.log
FICHERO_LOG2 = /any/logN/id/hss2/*.system.log
………
FICHERO_LOGN = /any/path1/id/…./*…..log

The result I expect is, for Event1:

LOG = /any/log1/id/idca-admin/idca-admin.log
      /any/log1/id/log1/any1.log
      /any/log1/httpd/*

For Event2:

LOG = /any/log2/id/id.log
      /any/log2/logging.log
      /any/log2/tree/httpd/ds/log2/*
      /any/log2/id/idca-batch/idca-batch2.log

For EventN:

LOG = /any/logN/data1/activemq.log
      /any/logN/id/hss2/*.system.log
      …….
      /any/path1/id/…./*…..log

I have tried with:

transform.conf:
[my-log]
REGEX = ^.*FICHERO_LOG.*\=\s*(?<log>.*?)\s*\n
MV-AD = true

props.conf:
[extractingFields]
TRANSFORM = other_transforms_stanza, my-log

But it's not working. Any ideas or help? What steps should I follow?

Thanks,
JAR
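One way to get there, as an untested sketch: at search time, an inline rex with max_match=0 builds the multivalue field directly:

```spl
| rex field=_raw max_match=0 "FICHERO_LOG\d+\s*=\s*(?<LOG>\S+)"
```

For an automatic extraction, the transforms-based route normally needs the plural/underscored names (transforms.conf, MV_ADD, and a REPORT- entry in props.conf) rather than the ones shown above; [extractingFields] is assumed here to be the sourcetype stanza:

```ini
# transforms.conf
[my-log]
REGEX = FICHERO_LOG\d+\s*=\s*(?<LOG>\S+)
MV_ADD = true

# props.conf
[extractingFields]
REPORT-my-log = my-log
```

MV_ADD = true is what makes repeated matches of the same capture group accumulate into a multivalue field instead of keeping only the first match.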