Thanks, all. I am using a Simple XML (classic) dashboard.
@ITWhisperer is asking which paradigm your dashboard uses. Splunk has two very different ones: "Simple XML" is also called "Classic Dashboard" or "Dashboard Classic". If you click the "Dashboards" tab in the Search app in 9.3, you'll see three panels like these:

Examples for Dashboard Studio - Browse examples of dashboards & visualizations. (Visit Example Hub)
Intro to Dashboard Studio - Learn how to build dashboards with Dashboard Studio. (Learn More)
Intro to Classic Dashboards - Learn how to build traditional Simple XML dashboards. (Learn More)

Follow the links to learn about their respective capabilities and programming/learning costs. (Both provide some visual design tools, although some advanced features still require editing the underlying code.) If you are modifying an existing dashboard, search for it in this tab and look at the "Type" column.
As I said above, the answer is no. Splunk's interactions (better known in Dashboard Classic as "drilldowns") are based on the selected row only. If a transposed table suits your need, you can transpose the table, then interact with its rows.
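For illustration, here is a minimal Simple XML sketch of a row-based drilldown (the searches, token name, and field name are hypothetical): clicking a table row sets a token from that row, which reveals a detail panel below.

```xml
<dashboard>
  <row>
    <panel>
      <table>
        <search>
          <query>index=_internal | stats count by sourcetype</query>
        </search>
        <drilldown>
          <!-- $row.sourcetype$ comes from the clicked row -->
          <set token="selected_sourcetype">$row.sourcetype$</set>
        </drilldown>
      </table>
    </panel>
  </row>
  <!-- This row stays hidden until the token above is set -->
  <row depends="$selected_sourcetype$">
    <panel>
      <table>
        <search>
          <query>index=_internal sourcetype="$selected_sourcetype$" | head 10</query>
        </search>
      </table>
    </panel>
  </row>
</dashboard>
```

The detail panel is driven by the whole selected row; there is no built-in token for "which column was transposed into", which is why transposing first can help.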
Hi, can anyone please advise on a search query to find the overall health status of VMware using metric logs?

Index: vmware_metric

SPL:

| mstats avg("vsphere.usage") prestats=true WHERE "index"="vmware-metrics" AND "host"="system1.local" AND ("host"="system2" OR "uuid"="12457896") span=10s
| timechart avg("vsphere.vm.cpu.usage") AS Avg span=10s
| fields - _span*
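As a hedged, cleaned-up sketch of that search (note the question names the index both vmware_metric and vmware-metrics, so use whichever actually exists): a timechart following mstats prestats=true can only aggregate the same metric that mstats computed, so the two metric names must match. Assuming vsphere.vm.cpu.usage from the timechart line is the intended metric:

```
| mstats avg("vsphere.vm.cpu.usage") prestats=true
    WHERE "index"="vmware-metrics" AND "host"="system1.local" span=10s
| timechart avg("vsphere.vm.cpu.usage") AS Avg span=10s
| fields - _span*
```

Adjust the WHERE filters to your environment; the host/uuid combination in the original query filters on two different hosts at once, which would match nothing.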
Hi @Redwood, regarding https://docs.splunk.com/Documentation/SplunkCloud/8.0.2007/Data/UsetheHTTPEventCollector: may I know why you are using the 8.0.2007 documentation? Unless you need a particular doc version, please use "latest" instead of that version number so you get the current documentation: https://docs.splunk.com/Documentation/SplunkCloud/latest/Data/UsetheHTTPEventCollector#Configure_HTTP_Event_Collector_on_Splunk_Cloud_Platform Regarding the error, please check the previous reply and let us know whether it works or not; then we can troubleshoot further. Thanks. Best Regards, Sekar
Hi @gcusello, I thought about testing this but am not sure how. I will check the other options from the replies and get back to you. Thanks. Best Regards, Sekar
Sorry for the lack of clarity. I mean, for example, the display contains a table with 3 columns. From there, I want to show certain information below the table depending on which table column I click. Is that possible?
Oh, I see! How do I determine that? I think I should be using Simple XML.
Thank you for the information. So I create the CSVs like this, for sse_host_to_country:

host,country
host1.example.com,Japan
host2.example.com,Malaysia
host3.example.net,Australia
host4.example.org,Singapore

And this for gdpr_user_category:

user,category
user1@example.com,User
user2@example.com,Admin
user3@example.com,PowerUser
user4@example.com,User
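Assuming the two CSVs are uploaded as lookup table files with lookup definitions of the same names, a minimal sketch of using them at search time (the index name is a placeholder):

```
index=main
| lookup sse_host_to_country host OUTPUT country
| lookup gdpr_user_category user OUTPUT category
| stats count by country, category
```

Events whose host or user is missing from the CSVs will simply get no country/category value, so consider a fillnull or a default row if you need complete coverage.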
Remove "authorisationheader" from the URL; that's not a valid HEC URL. If that doesn't help, then please post the exact text of the error message(s) you see, and also identify the "it" that tells you the URL is incorrect.
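As a sketch of what a well-formed Splunk Cloud HEC request looks like (the hostname and token below are placeholders): the path ends at /services/collector/event with a single slash before "services", and the token travels in the Authorization header rather than in the URL.

```shell
#!/bin/sh
# Placeholder values -- replace with your real stack name and HEC token.
HEC_HOST="http-inputs-myhostname.splunkcloud.com"
HEC_TOKEN="00000000-0000-0000-0000-000000000000"

# Single slash before "services"; no token segment appended to the path.
HEC_URL="https://${HEC_HOST}:443/services/collector/event"
echo "$HEC_URL"

# Example request (commented out so this sketch runs without network access):
# curl "$HEC_URL" \
#   -H "Authorization: Splunk ${HEC_TOKEN}" \
#   -d '{"event": "hello world", "sourcetype": "manual"}'
```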
@tscroggins Perfect, it worked smoothly. I had taken the long way round:

| spath
| rename message as _raw
| extract
| rex "\"sessionId\"\:\"(?<SessionID>.*?)\"\,\"clientTransactionId\"\:\"(?<ClientTransactionId>.*?)\"\,\"transactionId\""
Hi all, I am a bit of a newbie here and am trying to set up HEC on Splunk Cloud; however, the URL I created following the event collector documentation (https://docs.splunk.com/Documentation/SplunkCloud/8.0.2007/Data/UsetheHTTPEventCollector) doesn't appear to be working. Looking at the HEC dashboard, occasionally there is some activity showing, but it tells me the URL is incorrect. I have tried numerous changes to the URL and followed tons of advice on here, but nothing appears to work. I am clearly missing something and would really appreciate some guidance.

https://http-inputs-myhostname.splunkcloud.com:443//services/collector/event/authorisationheader

I have tried replacing "event" with "raw" and changed the port, although I am using a Splunk Cloud Platform instance rather than a free trial. I have removed SSL and re-enabled it. I would be very grateful for any advice and support here. Thank you.
"Still, the key_value part should be a proper object containing key-value pairs, not an embedded string. That makes no sense. Fix your data." This is incredibly common and, in most cases, outside the control of the destination. In the Logstash/Elasticsearch world, I'd parse the message field with a grok filter/processor followed by a json filter/processor to parse key_value into a JSON object. ("Elastic" translates to "overhead," but it's really just a trade-off relative to how Lucene works.) In the Splunk world, I'd leave it as is and use search-time field extractions, field aliases, etc., plus accelerated data models.
Hello, I have a distributed Splunk architecture and I am trying to optimise/trim the received logs using the Ingest Actions feature. However, I get the error below: I tried to create a new ruleset on the heavy forwarder and the indexer, but it returned the error message "this endpoint will reject all requests until pass4SymmKey has been properly set." So I want to check: where should I implement this feature, on the indexer or the HF? And are there any prerequisites for implementing it?
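For reference, pass4SymmKey is set in server.conf; which stanza applies depends on the component raising the error, so treat this as an illustrative fragment only (the secret is a placeholder) and verify against the Ingest Actions documentation for your version before applying:

```
# server.conf -- illustrative sketch; the exact stanza depends on your deployment
[general]
pass4SymmKey = <shared secret, identical on the instances that must communicate>
```

After editing, restart the instance; the value is hashed in place on first start.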
Please don't create duplicate threads on the same subject. You already asked about parsing HX events here https://community.splunk.com/t5/Deployment-Architecture/forwarded-events-and-field-extraction/m-p/698436#M28532
First and foremost, this is not JSON within JSON. This is a JSON object embedded within something that resembles JSON but is syntactically incorrect. I suspect you're getting that data from Filebeat, Logstash, or a similar tool. I'd try to fix the format to be proper, well-formed JSON; then it "just works". EDIT: OK, that's what you get when you're posting not raw data but rather the preformatted output from the web UI. Still, the key_value part should be a proper object containing key-value pairs, not an embedded string. That makes no sense. Fix your data.
An app is just a bunch of files; for field extractions, it simply contains a set of props/transforms settings. I'd still consider switching to a saner reporting format first, for example JSON.
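As an illustrative sketch of what such an app might carry (the sourcetype, stanza name, and regex here are hypothetical, modeled on the sessionId/clientTransactionId extraction discussed in this thread), a search-time extraction lives in two files:

```
# default/props.conf
[my_sourcetype]
REPORT-session_fields = extract_session_fields

# default/transforms.conf
[extract_session_fields]
REGEX = "sessionId":"(?<SessionID>[^"]*)","clientTransactionId":"(?<ClientTransactionId>[^"]*)"
```

Deploy the app to the search head(s); search-time extractions need no reindexing.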
Hi @jip31, In-browser performance monitoring is usually done with client-side JavaScript instrumentation. The instrumentation code would be included in your web site's source code. In Splunk Observability, this would be done with Splunk RUM. See e.g. https://docs.splunk.com/observability/en/gdi/get-data-in/rum/rum-instrumentation.html and https://docs.splunk.com/observability/en/gdi/get-data-in/rum/browser/get-browser-data-in.html. In AppDynamics, this would be with End User Monitoring. See e.g. https://docs.appdynamics.com/appd/24.x/latest/en/end-user-monitoring/browser-monitoring/browser-real-user-monitoring/inject-the-javascript-agent. Splunk Universal Forwarder includes the functionality required to index Windows event logs. Out of the box, the forwarder can be configured to collect Application, Forwarded Events, Security, Setup, and System event logs. See https://docs.splunk.com/Documentation/Forwarder/9.3.0/Forwarder/InstallaWindowsuniversalforwarderfromaninstaller. Splunk Add-on for Microsoft Windows provides a comprehensive set of inputs, props, etc. for many Windows operating system, service, and application event logs. Start with https://splunkbase.splunk.com/app/742 and https://docs.splunk.com/Documentation/AddOns/released/Windows/AbouttheSplunkAdd-onforWindows.  
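As a minimal sketch of the Windows event log collection mentioned above, an inputs.conf stanza on the universal forwarder might look like this (the index name is a placeholder; create it on the indexers first):

```
# inputs.conf on the universal forwarder
[WinEventLog://Security]
disabled = 0
index = wineventlog
```

The Splunk Add-on for Microsoft Windows ships ready-made stanzas like this, plus the props needed to parse the events, so enabling its inputs is usually preferable to hand-writing them.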
Hi @Codie,

If your _raw value looks like this:

{
  "@timestamp": "2024-09-05T10:59:34.826855417+10:00",
  "appName": "TestApp",
  "environment": "UAT",
  "ivUser": "Ashish",
  "level": "INFO",
  "logger": "com.app.login",
  "message": "New user state created - state_id: XXXX-YYYYYY, key_id: twoFactorAuth, key_value: {\"tamSessionIndex\":\"1d1ad722-XXXX-11ef-8a2b-005056b70cf5\",\"devicePrint\":\"DDDDDDDDDDD\",\"createdAt\":\"2099-09-05T00:59:34.734404799Z\",\"updatedAt\":\"2099-09-05T00:59:34.734404799Z\",\"clientSessionId\":\"ppppppppppppp\",\"sessionId\":\"WWWWWWWWW\",\"clientTransactionId\":\"8fd2353d-d609-XXXX-52i6-2e1dc12359m4\",\"transactionId\":\"9285-:f18c10db191:XXXXXXXX_TRX\",\"twoFaResult\":\"CHALLENGE\",\"newDevice\":true,\"newLocation\":false,\"overseas\":true} with TTL: 46825",
  "parentId": "",
  "spanId": "14223cXXXX6d63d5",
  "tamSessionIndex": "1d1ad722-6b22-11ef-8a2b-XXXXXXX",
  "thread": "https-jsse-nio-XXXX-exec-6",
  "traceId": "66d90275ecc565aa61XXXXXXXX02f5815"
}

you should have a message field with the value:

New user state created - state_id: XXXX-YYYYYY, key_id: twoFactorAuth, key_value: {"tamSessionIndex":"1d1ad722-XXXX-11ef-8a2b-005056b70cf5","devicePrint":"DDDDDDDDDDD","createdAt":"2099-09-05T00:59:34.734404799Z","updatedAt":"2099-09-05T00:59:34.734404799Z","clientSessionId":"ppppppppppppp","sessionId":"WWWWWWWWW","clientTransactionId":"8fd2353d-d609-XXXX-52i6-2e1dc12359m4","transactionId":"9285-:f18c10db191:XXXXXXXX_TRX","twoFaResult":"CHALLENGE","newDevice":true,"newLocation":false,"overseas":true} with TTL: 46825

The key_value data may vary, and you'll need to adjust the regular expression as needed, but as a starting point, you can extract key_value (as message_key_value) and clientTransactionId cleanly in SPL using:

| rex field=message "key_value: (?<message_key_value>\{.*\}) with TTL:"
| spath input=message_key_value

or

| rex field=message "key_value: (?<message_key_value>\{.*\}) with TTL:"
| eval clientTransactionId=json_extract(json(message_key_value), "clientTransactionId")

or

| eval clientTransactionId=json_extract(json(replace(message, ".* key_value: (\{.*\}) with TTL: .*", "\\1")), "clientTransactionId")

or other variations.
If you want to provide both en-US and it-IT translations in your app, check https://docs.splunk.com/Documentation/Splunk/latest/AdvancedDev/TranslateSplunk for the localization process.