All Posts


Hello, I have a distributed Splunk architecture and I am trying to optimise/trim the received logs using the Ingest Actions feature. However, I get the following error: I tried to create a new rule set on the heavy forwarder and on the indexer, but it returned the error message "this endpoint will reject all requests until pass4SymmKey has been properly set." So I want to check: where should I implement this feature, on the indexer or the HF? And are there any prerequisites to implement it?
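For reference, the error message refers to the pass4SymmKey secret in server.conf on the instance where the rule set is created. A minimal sketch of the relevant stanza, assuming the default [general] stanza is the one your deployment requires (the secret value is a placeholder):

# server.conf (sketch; the stanza and secret depend on your deployment)
[general]
pass4SymmKey = <your_shared_secret>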
Please don't create duplicate threads on the same subject. You already asked about parsing HX events here https://community.splunk.com/t5/Deployment-Architecture/forwarded-events-and-field-extraction/m-p/698436#M28532
First and foremost - this is not JSON within JSON. This is a JSON object embedded within something that resembles JSON but is syntactically incorrect. I suspect you're getting that data from some Filebeat, Logstash, or similar tool. I'd try to fix the format to be proper, well-formed JSON. Then it "just works". EDIT: OK, that's what you get when you're not posting raw data, but rather the preformatted output from the web UI. Still, the key_value part should be a proper object containing key-value pairs, not an embedded string. That makes no sense. Fix your data.
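To illustrate the point (a sketch with abbreviated placeholder values, not the poster's actual data), a well-formed version would carry key_value as a nested object rather than an escaped string:

{
  "key_id": "twoFactorAuth",
  "key_value": {
    "twoFaResult": "CHALLENGE",
    "newDevice": true,
    "overseas": true
  }
}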
An app is just a bunch of files. For field extractions it just contains a bunch of props/transforms settings. I'd still consider switching to a saner reporting format first. For example, JSON.
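For context, an extraction app of that kind is only a small directory tree (a sketch; the app name is a placeholder):

$SPLUNK_HOME/etc/apps/<my_extractions>/
    default/
        props.conf
        transforms.conf
    metadata/
        default.meta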
Hi @jip31, In-browser performance monitoring is usually done with client-side JavaScript instrumentation. The instrumentation code would be included in your web site's source code.

In Splunk Observability, this would be done with Splunk RUM. See e.g. https://docs.splunk.com/observability/en/gdi/get-data-in/rum/rum-instrumentation.html and https://docs.splunk.com/observability/en/gdi/get-data-in/rum/browser/get-browser-data-in.html. In AppDynamics, this would be done with End User Monitoring. See e.g. https://docs.appdynamics.com/appd/24.x/latest/en/end-user-monitoring/browser-monitoring/browser-real-user-monitoring/inject-the-javascript-agent.

Splunk Universal Forwarder includes the functionality required to index Windows event logs. Out of the box, the forwarder can be configured to collect Application, Forwarded Events, Security, Setup, and System event logs. See https://docs.splunk.com/Documentation/Forwarder/9.3.0/Forwarder/InstallaWindowsuniversalforwarderfromaninstaller. Splunk Add-on for Microsoft Windows provides a comprehensive set of inputs, props, etc. for many Windows operating system, service, and application event logs. Start with https://splunkbase.splunk.com/app/742 and https://docs.splunk.com/Documentation/AddOns/released/Windows/AbouttheSplunkAdd-onforWindows.
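As a rough sketch of the forwarder side (channel names beyond these defaults are deployment-specific), those Windows event logs are enabled through WinEventLog stanzas in inputs.conf:

# inputs.conf on the universal forwarder (sketch)
[WinEventLog://Application]
disabled = 0

[WinEventLog://Security]
disabled = 0

[WinEventLog://System]
disabled = 0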
Hi @Codie, If your _raw value looks like this:

{
  "@timestamp": "2024-09-05T10:59:34.826855417+10:00",
  "appName": "TestApp",
  "environment": "UAT",
  "ivUser": "Ashish",
  "level": "INFO",
  "logger": "com.app.login",
  "message": "New user state created - state_id: XXXX-YYYYYY, key_id: twoFactorAuth, key_value: {\"tamSessionIndex\":\"1d1ad722-XXXX-11ef-8a2b-005056b70cf5\",\"devicePrint\":\"DDDDDDDDDDD\",\"createdAt\":\"2099-09-05T00:59:34.734404799Z\",\"updatedAt\":\"2099-09-05T00:59:34.734404799Z\",\"clientSessionId\":\"ppppppppppppp\",\"sessionId\":\"WWWWWWWWW\",\"clientTransactionId\":\"8fd2353d-d609-XXXX-52i6-2e1dc12359m4\",\"transactionId\":\"9285-:f18c10db191:XXXXXXXX_TRX\",\"twoFaResult\":\"CHALLENGE\",\"newDevice\":true,\"newLocation\":false,\"overseas\":true} with TTL: 46825",
  "parentId": "",
  "spanId": "14223cXXXX6d63d5",
  "tamSessionIndex": "1d1ad722-6b22-11ef-8a2b-XXXXXXX",
  "thread": "https-jsse-nio-XXXX-exec-6",
  "traceId": "66d90275ecc565aa61XXXXXXXX02f5815"
}

You should have a message field with value:

New user state created - state_id: XXXX-YYYYYY, key_id: twoFactorAuth, key_value: {"tamSessionIndex":"1d1ad722-XXXX-11ef-8a2b-005056b70cf5","devicePrint":"DDDDDDDDDDD","createdAt":"2099-09-05T00:59:34.734404799Z","updatedAt":"2099-09-05T00:59:34.734404799Z","clientSessionId":"ppppppppppppp","sessionId":"WWWWWWWWW","clientTransactionId":"8fd2353d-d609-XXXX-52i6-2e1dc12359m4","transactionId":"9285-:f18c10db191:XXXXXXXX_TRX","twoFaResult":"CHALLENGE","newDevice":true,"newLocation":false,"overseas":true} with TTL: 46825

The key_value data may vary, and you'll need to adjust the regular expression as needed, but as a starting point, you can extract key_value (as message_key_value) and clientTransactionId cleanly in SPL using:

| rex field=message "key_value: (?<message_key_value>\{.*\}) with TTL:"
| spath input=message_key_value

or

| rex field=message "key_value: (?<message_key_value>\{.*\}) with TTL:"
| eval clientTransactionId=json_extract(json(message_key_value), "clientTransactionId")

or

| eval clientTransactionId=json_extract(json(replace(message, ".* key_value: (\{.*\}) with TTL: .*", "\\1")), "clientTransactionId")

or other variations.
If you want to provide both en-US and it-IT translations in your app, check https://docs.splunk.com/Documentation/Splunk/latest/AdvancedDev/TranslateSplunk for the localization process.
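As a rough sketch of the file layout involved (the app name is a placeholder; see the linked documentation for the full workflow), translations live in per-locale gettext catalogs inside the app, with the .po source compiled to a .mo file:

$SPLUNK_HOME/etc/apps/<your_app>/locale/it-IT/LC_MESSAGES/messages.po
$SPLUNK_HOME/etc/apps/<your_app>/locale/it-IT/LC_MESSAGES/messages.mo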
Hi there, the same story is true for me, actually after updating ESCU to 4330. Not only for custom correlation search rules but also for cloned rules; before that everything was OK! When you clone a built-in rule, e.g. "Excessive Failed Logins", to something like "Excessive Failed Logins - Custom", it appears in the Security Posture Top Notable Events dashboard panel as "Access - Excessive Failed Logins - Custom - Rule", and when you click on it to open it in Incident Review, it doesn't filter on it as the selected source; all incidents are listed as if no filter were selected.
This was really bothering me ... and I isolated the cause to browser-local storage managed by SplunkWeb. Keys beginning with splunk-appnav cache nav menus, e.g.:

splunk-appnav:localized_nav:admin:en-US

{
  "nav": [
    {
      "label": "Localized Nav Dashboard",
      "uri": "/en-US/app/localized_nav/localized_nav_dashboard",
      "viewName": "localized_nav_dashboard",
      "isDefault": true
    },
    {
      "label": "More...",
      "submenu": [
        { "label": "Search", "uri": "/en-US/app/localized_nav/search", "viewName": "search" },
        { "label": "Analytics", "uri": "/en-US/app/localized_nav/analytics_workspace", "viewName": "analytics_workspace" },
        { "label": "Datasets", "uri": "/en-US/app/localized_nav/datasets", "viewName": "datasets" },
        { "label": "Reports", "uri": "/en-US/app/localized_nav/reports", "viewName": "reports" },
        { "label": "Alerts", "uri": "/en-US/app/localized_nav/alerts", "viewName": "alerts" },
        { "label": "Dashboards", "uri": "/en-US/app/localized_nav/dashboards", "viewName": "dashboards" }
      ]
    }
  ],
  "color": "#ff0000",
  "label": "Localized Nav",
  "searchView": "search",
  "lastModified": 1725807491341
}

Switching locales generates a new cached menu under a new key:

splunk-appnav:localized_nav:admin:it-IT

If you're having issues with nav menus, you may be able to resolve them with one of the following:

- Clear browser cache
- Clear site cookies/storage (linked in Chromium-based browsers)
- Clear local storage entries in browser dev tools
Thank you for the kind reply! Where can I find the SplunkCloud rootCA?
That's a whole different ball game! Are you using Studio or Classic Simple XML dashboards?
Hi, I would like to extract a field from JSON logs which are already in a prettified format. I would like to extract a field named "clientTransactionId" from the below sample data.

{ [-]
   @timestamp: 2024-09-05T10:59:34.826855417+10:00
   appName: TestApp
   environment: UAT
   ivUser: Ashish
   level: INFO
   logger: com.app.login
   message: New user state created - state_id: XXXX-YYYYYY, key_id: twoFactorAuth, key_value: {"tamSessionIndex":"1d1ad722-XXXX-11ef-8a2b-005056b70cf5","devicePrint":"DDDDDDDDDDD","createdAt":"2099-09-05T00:59:34.734404799Z","updatedAt":"2099-09-05T00:59:34.734404799Z","clientSessionId":"ppppppppppppp","sessionId":"WWWWWWWWW","clientTransactionId":"8fd2353d-d609-XXXX-52i6-2e1dc12359m4","transactionId":"9285-:f18c10db191:XXXXXXXX_TRX","twoFaResult":"CHALLENGE","newDevice":true,"newLocation":false,"overseas":true} with TTL: 46825
   parentId:
   spanId: 14223cXXXX6d63d5
   tamSessionIndex: 1d1ad722-6b22-11ef-8a2b-XXXXXXX
   thread: https-jsse-nio-XXXX-exec-6
   traceId: 66d90275ecc565aa61XXXXXXXX02f5815
}
Thank you. On that note, how can I highlight in red (or something) the specific timestamp that was used to fill down the rest of the rows below, i.e. a way to differentiate it from the rest, which were filled using filldown?
Could you help me with examples please? Because I tried to find an app for Trellix HX endpoint security but I can't find it. Thanks!
It looks like you may be using a default extraction which takes name=value and terminates the value at the next space. You will probably have to do some field-specific extractions to override these defaults.
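As a hedged sketch of such an override (the stanza and field name come from this thread; the regex is an assumption to adapt to your actual CEF extensions), a field-specific extraction in props.conf can consume everything up to the next key= pair instead of stopping at the first space:

# props.conf (sketch; adjust the sourcetype stanza and regex to your data)
[trellix]
EXTRACT-categoryDeviceType = categoryDeviceType=(?<categoryDeviceType>.+?)(?=\s+\w+=|$)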
Line 3 creates a string in t2, so line 4 should be parsing the string with strptime, not strftime.
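For illustration (a sketch with an assumed format string and field names, since the original snippet isn't shown), the round trip looks like this: strftime renders an epoch as a string, and strptime parses the string back into an epoch so it can be subtracted:

| eval t2=strftime(_time, "%Y-%m-%d %H:%M:%S")
| eval t3=strptime(t2, "%Y-%m-%d %H:%M:%S")
| eval diff=_time - t3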
Hello members, I'm facing problems parsing the event details in Splunk. I have forwarded the events from the HF to the indexers and they are now searchable, but I'm facing issues with field extractions and event details because the values are truncated. For example, if I have a sample event like this:

CEF:0|fireeye|HX|4.8.0|IOC Hit Found|IOC Hit Found|10|rt=Jul 23 2019 16:54:24 UTC dvchost=fireeye.mps.test categoryDeviceGroup=/IDS categoryDeviceType=Forensic Investigation categoryObject=/Host

the categoryDeviceType parameter is truncated in field extraction, so it displays only "Forensic" and the rest of the string is cut off. Can anyone please help with this matter? My props.conf is:

[trellix]
category = Custom
pulldown_type = 1
TIME_FORMAT = ^<\d+>
EVAL-_time = strftime(_time, "%Y %b %d %H:%M:%S")
TIME_PREFIX = %b %d %H:%M:%S
That doesn't always work. I can't seem to find a good solution for this type of problem either. I can't convert this timestamp for subtraction purposes, for example (see how the t3 column is empty?):
Are there other options for parsing, like editing props.conf, since I don't want to add a new app? Is it possible to handle this type of event by just editing props.conf?

My props.conf:

[trellix]
category = Custom
pulldown_type = 1
TIME_FORMAT = ^<\d+>
EVAL-_time = strftime(_time, "%Y %b %d %H:%M:%S")
TIME_PREFIX = %b %d %H:%M:%S
Hi @jmartens, Very sorry for the inconvenience. Engineering is aware of this (reference: SPL-258019). They have come up with a fix, which excludes the two internal users, 'nobody' and 'splunk-system', from the warning message. The fix will most likely be included in the next 9.1.x version after 9.1.6 and the next 9.2.x version after 9.2.3.