All Posts



Remove "authorisationheader" from the URL.  That's not a valid HEC URL. If that doesn't help, then please post the exact text of the error message(s) you see and also identify the "it" that tells you the URL is incorrect.
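For reference, a well-formed Splunk Cloud HEC event endpoint ends at /services/collector/event, and the token belongs in the Authorization header rather than in the URL path. A minimal sketch in Python — the host and token below are placeholders, not real values:

```python
import json
import urllib.request

# Placeholder host and token -- substitute your own Splunk Cloud values.
hec_url = "https://http-inputs-myhostname.splunkcloud.com:443/services/collector/event"
hec_token = "00000000-0000-0000-0000-000000000000"

# The HEC token is sent as "Authorization: Splunk <token>", never appended to the URL.
req = urllib.request.Request(
    hec_url,
    data=json.dumps({"event": "hello from HEC"}).encode("utf-8"),
    headers={"Authorization": "Splunk " + hec_token},
    method="POST",
)
# urllib.request.urlopen(req)  # would actually send the event; needs a reachable endpoint
```

Note there is no "authorisationheader" segment anywhere in the URL; that string in the original URL is what makes the endpoint invalid.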
@tscroggins Perfect. Worked smoothly. I took a long way as follows: | spath | rename message as _raw | extract | rex "\"sessionId\"\:\"(?<SessionID>.*?)\"\,\"clientTransactionId\"\:\"(?<ClientTransactionId>.*?)\"\,\"transactionId\""
Hi all, I am a bit of a newbie here, and am trying to set up HEC on Splunk Cloud; however, the URL I have created following the event collector documentation ( https://docs.splunk.com/Documentation/SplunkCloud/8.0.2007/Data/UsetheHTTPEventCollector ) doesn't appear to be working. Looking at the HEC dashboard, occasionally there is some activity showing, but it tells me the URL is incorrect. I have tried numerous changes to the URL and followed tons of advice on here, but nothing appears to be working. I am clearly missing something and would really appreciate some guidance. https://http-inputs-myhostname.splunkcloud.com:443//services/collector/event/authorisationheader I have tried replacing event with raw and changed the port, although I am using a Splunk Cloud Platform instance rather than a free trial. I have removed SSL and re-enabled it. I would be very grateful for any advice and support here. Thank you
Still, the key_value part should be a proper object containing key-value pairs, not an embedded string. That makes no sense. Fix your data. This is incredibly common and, in most cases, outside the control of the destination. In the Logstash/Elasticsearch world, I'd parse the message field with a grok filter/processor followed by a json filter/processor to parse key_value into a JSON object. ("Elastic" translates to "overhead," but it's really just a trade-off relative to how Lucene works.) In the Splunk world, I'd leave it as is and use search-time field extractions, field aliases, etc. and accelerated data models.
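A sketch of that Logstash approach — the field names are taken from the sample event in this thread, and the grok pattern is an assumption you'd adjust to your actual message layout:

```
filter {
  # Pull the embedded JSON string out of the free-text message.
  grok {
    match => { "message" => "key_value: %{DATA:key_value_raw} with TTL: %{NUMBER:ttl}" }
  }
  # Parse the extracted string into a real JSON object under key_value.
  json {
    source => "key_value_raw"
    target => "key_value"
  }
}
```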
Hello, I have a distributed Splunk architecture and I am trying to optimise/trim the received logs using the Ingest Actions feature. However, I have the below error: I tried to create a new rule set on the heavy forwarder and the indexer, but it returned the error message "this endpoint will reject all requests until pass4SymmKey has been properly set." So, I want to check: where should I implement this feature, on the indexer or the HF? And is there any prerequisite to implement it?
Please don't create duplicate threads on the same subject. You already asked about parsing HX events here https://community.splunk.com/t5/Deployment-Architecture/forwarded-events-and-field-extraction/m-p/698436#M28532
First and foremost - this is not JSON within JSON. This is a JSON object embedded within something that resembles JSON but is syntactically incorrect. I suspect you're getting that data from some Filebeat, Logstash, or similar tool. I'd try to fix the format to be proper, well-formed JSON. Then it "just works". EDIT: OK, that's what you get when you're not posting raw data, but rather the preformatted output from the web UI. Still, the key_value part should be a proper object containing key-value pairs, not an embedded string. That makes no sense. Fix your data.
An app is just a bunch of files. For field extractions they just contain a bunch of props/transforms settings. I'd still consider switching to a more sane reporting format first. For example - json.
Hi @jip31, In-browser performance monitoring is usually done with client-side JavaScript instrumentation. The instrumentation code would be included in your web site's source code. In Splunk Observability, this would be done with Splunk RUM. See e.g. https://docs.splunk.com/observability/en/gdi/get-data-in/rum/rum-instrumentation.html and https://docs.splunk.com/observability/en/gdi/get-data-in/rum/browser/get-browser-data-in.html. In AppDynamics, this would be with End User Monitoring. See e.g. https://docs.appdynamics.com/appd/24.x/latest/en/end-user-monitoring/browser-monitoring/browser-real-user-monitoring/inject-the-javascript-agent. Splunk Universal Forwarder includes the functionality required to index Windows event logs. Out of the box, the forwarder can be configured to collect Application, Forwarded Events, Security, Setup, and System event logs. See https://docs.splunk.com/Documentation/Forwarder/9.3.0/Forwarder/InstallaWindowsuniversalforwarderfromaninstaller. Splunk Add-on for Microsoft Windows provides a comprehensive set of inputs, props, etc. for many Windows operating system, service, and application event logs. Start with https://splunkbase.splunk.com/app/742 and https://docs.splunk.com/Documentation/AddOns/released/Windows/AbouttheSplunkAdd-onforWindows.  
Hi @Codie, If your _raw value looks like this: { "@timestamp": "2024-09-05T10:59:34.826855417+10:00", "appName": "TestApp", "environment": "UAT", "ivUser": "Ashish", "level": "INFO", "logger": "com.app.login", "message": "New user state created - state_id: XXXX-YYYYYY, key_id: twoFactorAuth, key_value: {\"tamSessionIndex\":\"1d1ad722-XXXX-11ef-8a2b-005056b70cf5\",\"devicePrint\":\"DDDDDDDDDDD\",\"createdAt\":\"2099-09-05T00:59:34.734404799Z\",\"updatedAt\":\"2099-09-05T00:59:34.734404799Z\",\"clientSessionId\":\"ppppppppppppp\",\"sessionId\":\"WWWWWWWWW\",\"clientTransactionId\":\"8fd2353d-d609-XXXX-52i6-2e1dc12359m4\",\"transactionId\":\"9285-:f18c10db191:XXXXXXXX_TRX\",\"twoFaResult\":\"CHALLENGE\",\"newDevice\":true,\"newLocation\":false,\"overseas\":true} with TTL: 46825", "parentId": "", "spanId": "14223cXXXX6d63d5", "tamSessionIndex": "1d1ad722-6b22-11ef-8a2b-XXXXXXX", "thread": "https-jsse-nio-XXXX-exec-6", "traceId": "66d90275ecc565aa61XXXXXXXX02f5815" } You should have a message field with value: New user state created - state_id: XXXX-YYYYYY, key_id: twoFactorAuth, key_value: {"tamSessionIndex":"1d1ad722-XXXX-11ef-8a2b-005056b70cf5","devicePrint":"DDDDDDDDDDD","createdAt":"2099-09-05T00:59:34.734404799Z","updatedAt":"2099-09-05T00:59:34.734404799Z","clientSessionId":"ppppppppppppp","sessionId":"WWWWWWWWW","clientTransactionId":"8fd2353d-d609-XXXX-52i6-2e1dc12359m4","transactionId":"9285-:f18c10db191:XXXXXXXX_TRX","twoFaResult":"CHALLENGE","newDevice":true,"newLocation":false,"overseas":true} with TTL: 46825 The key_value data may vary, and you'll need to adjust the regular expression as needed, but as a starting point, you can extract key_value (as message_key_value) and clientTransactionId cleanly in SPL using: | rex field=message "key_value: (?<message_key_value>\{.*\}) with TTL:" | spath input=message_key_value or | rex field=message "key_value: (?<message_key_value>\{.*\}) with TTL:" | eval clientTransactionId=json_extract(json(message_key_value), "clientTransactionId") or | eval clientTransactionId=json_extract(json(replace(message, ".* key_value: (\{.*\}) with TTL: .*", "\\1")), "clientTransactionId") or other variations.
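Outside SPL, the same two-step extraction (regex to isolate the embedded object, then JSON parsing) can be prototyped in Python to sanity-check the regular expression before wiring it into a search. The sample message is shortened from the one in this thread:

```python
import json
import re

# Shortened version of the message field from the thread.
message = ('New user state created - state_id: XXXX-YYYYYY, key_id: twoFactorAuth, '
           'key_value: {"sessionId":"WWWWWWWWW","clientTransactionId":"8fd2353d"} '
           'with TTL: 46825')

# Same pattern shape as the SPL rex: grab everything between "key_value: " and " with TTL:".
m = re.search(r"key_value: (\{.*\}) with TTL:", message)
key_value = json.loads(m.group(1))  # parse the extracted string as JSON
print(key_value["clientTransactionId"])  # -> 8fd2353d
```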
If you want to provide both en-US and it-IT translations in your app, check https://docs.splunk.com/Documentation/Splunk/latest/AdvancedDev/TranslateSplunk for the localization process.
Hi there, the same story is true for me, actually after updating ESCU to 4330. Not only for custom correlation search rules but also for cloned rules. Before that, everything was OK! When you clone a built-in rule, e.g. "Excessive Failed Logins", to something like "Excessive Failed Logins - Custom", it appears in the Security Posture Top Notable Events dashboard panel as "Access - Excessive Failed Logins - Custom - Rule", and when you click on it to open it in Incident Review, it doesn't filter on this as the selected source; instead, all incidents are listed as if no filter were selected.
This was really bothering me ... and I isolated the cause to browser-local storage managed by SplunkWeb. Keys beginning with splunk-appnav cache nav menus, e.g.:

splunk-appnav:localized_nav:admin:en-US

{ "nav": [ { "label": "Localized Nav Dashboard", "uri": "/en-US/app/localized_nav/localized_nav_dashboard", "viewName": "localized_nav_dashboard", "isDefault": true }, { "label": "More...", "submenu": [ { "label": "Search", "uri": "/en-US/app/localized_nav/search", "viewName": "search" }, { "label": "Analytics", "uri": "/en-US/app/localized_nav/analytics_workspace", "viewName": "analytics_workspace" }, { "label": "Datasets", "uri": "/en-US/app/localized_nav/datasets", "viewName": "datasets" }, { "label": "Reports", "uri": "/en-US/app/localized_nav/reports", "viewName": "reports" }, { "label": "Alerts", "uri": "/en-US/app/localized_nav/alerts", "viewName": "alerts" }, { "label": "Dashboards", "uri": "/en-US/app/localized_nav/dashboards", "viewName": "dashboards" } ] } ], "color": "#ff0000", "label": "Localized Nav", "searchView": "search", "lastModified": 1725807491341 }

Switching locales generates a new cached menu under a new key:

splunk-appnav:localized_nav:admin:it-IT

If you're having issues with nav menus, you may be able to resolve them with one of the following:
- Clear browser cache
- Clear site cookies/storage (linked in Chromium-based browsers)
- Clear local storage entries in browser dev tools
Thank you for the kind reply! Where can I find the Splunk Cloud root CA?
That's a whole different ball game! Are you using Studio or Classic Simple XML dashboards?
Hi, I would like to extract a field from JSON logs which are already in a prettier format. I would like to extract a field named "clientTransactionId" from the below sample data. { [-]    @timestamp: 2024-09-05T10:59:34.826855417+10:00    appName: TestApp    environment: UAT    ivUser: Ashish    level: INFO    logger: com.app.login    message: New user state created - state_id: XXXX-YYYYYY, key_id: twoFactorAuth, key_value: {"tamSessionIndex":"1d1ad722-XXXX-11ef-8a2b-005056b70cf5","devicePrint":"DDDDDDDDDDD","createdAt":"2099-09-05T00:59:34.734404799Z","updatedAt":"2099-09-05T00:59:34.734404799Z","clientSessionId":"ppppppppppppp","sessionId":"WWWWWWWWW","clientTransactionId":"8fd2353d-d609-XXXX-52i6-2e1dc12359m4","transactionId":"9285-:f18c10db191:XXXXXXXX_TRX","twoFaResult":"CHALLENGE","newDevice":true,"newLocation":false,"overseas":true} with TTL: 46825    parentId:    spanId: 14223cXXXX6d63d5    tamSessionIndex: 1d1ad722-6b22-11ef-8a2b-XXXXXXX    thread: https-jsse-nio-XXXX-exec-6    traceId: 66d90275ecc565aa61XXXXXXXX02f5815 }
Thank you. On that note, how can I highlight in red (or similar) the specific timestamp that was used to fill down the rest of the rows below, i.e. a way to differentiate it from the rows that were filled using filldown?
Could you help me with examples please? I tried to find an app for Trellix HX endpoint security but I can't find it. Thanks
It looks like you may be using a default extraction which takes name=value pairs, so the value is being terminated at the next space. You will probably have to do some field-specific extractions to override these defaults.
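As a hedged sketch, a field-specific search-time extraction overriding the default KV behavior might look like the following props.conf/transforms.conf fragment. The sourcetype, stanza name, and field name here are placeholders, and the regex would need adjusting to the actual event layout:

```
# props.conf -- hypothetical sourcetype
[my:sourcetype]
REPORT-msg = extract_full_message

# transforms.conf -- capture everything up to the next key= token or end of event,
# instead of stopping at the first space
[extract_full_message]
REGEX = message=(?<message>.+?)(?=\s+\S+=|\s*$)
```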
Line 3 creates a string in t2, so line 4 should be parsing the string with strptime, not strftime.
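The distinction generalizes beyond SPL: strftime formats a time value into a string, while strptime parses a string back into a time value. For illustration, the same round trip in Python:

```python
from datetime import datetime

# strftime: datetime -> string (format); strptime: string -> datetime (parse).
t2 = datetime(2024, 9, 5, 10, 59).strftime("%Y-%m-%d %H:%M")  # "2024-09-05 10:59"
parsed = datetime.strptime(t2, "%Y-%m-%d %H:%M")              # back to a datetime
print(parsed.year)  # -> 2024
```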