All Posts

I recently had cause to ingest Oracle Unified Directory logs in ODL format. I'm performing pretty simple file-based ingestion and didn't want to go down the path of DB Connect, etc., so it made sense to write my own parsing logic. I didn't have a lot of luck finding things precisely relevant to my logs and I could have benefited from the concrete example I'll lay out below, so here it is; hopefully it will help others. This specific example is OUD logs, but it's relevant to any arbitrary _KEY_n, _VAL_n extraction in Splunk.

Log format

The ODL logs are mostly well structured, but the format is a little odd to work with:
- Most fields are bracketed: [FIELD_1]
- The log starts with "header" fields which are fixed and contain single values
- The next section of the log has an arbitrary number of bracketed key-value pairs: [KEY: VALUE]
- The final piece is a non-structured string that may also contain brackets and colons

Below is a small sample of a few different types of logs:

[2016-12-30T11:08:46.216-05:00] [ESSBASE0] [NOTIFICATION:16] [TCP-59] [TCP] [ecid: 1482887126970,0] [tid: 140198389143872] Connected from [::ffff:999.999.99.999]
[2016-12-30T11:08:27.60-05:00] [ESSBASE0] [NOTIFICATION:16] [AGENT-1001] [AGENT] [ecid: 1482887126970,0] [tid: 140198073563456] Received client request: Clear Application/Database (from user [sampleuser@Native Directory])
[2016-12-30T11:08:24.302-05:00] [PLN3] [NOTIFICATION:16] [REQ-91] [REQ] [ecid: 148308120489,0] [tid: 140641102035264] [DBNAME: SAMPLE] Received Command [SetAlias] from user [sampleuser@Native Directory]
[2016-12-30T11:08:26.932-05:00] [PLN3] [NOTIFICATION:16] [SSE-82] [SSE] [ecid: 148308120489,0] [tid: 140641102035264] [DBNAME: SAMPLE] Spreadsheet Extractor Big Block Allocs -- Dyn.Calc.Cache : [202] non-Dyn.Calc.Cache : [0]
[2025-02-07T15:18:35.014-05:00] [OUD] [TRACE] [OUD-24641549] [PROTOCOL] [host: testds01.redacted.fake] [nwaddr: redacted] [tid: 200] [userId: ldap] [ecid: 0000PJY46q64Usw5sFt1iX1bdZJL0003QL,0:1] [category: RES] [conn: 1285] [op: 0] [msgID: 1] [result: 0] [authDN: uid=redacted,ou=redacted,o=redacted,c=redacted] [etime: 0] BIND
[2025-02-07T15:18:35.014-05:00] [OUD] [TRACE] [OUD-24641548] [PROTOCOL] [host: testds01.redacted.fake] [nwaddr: redacted] [tid: 200] [userId: ldap] [ecid: 0000PJY46q64Usw5sFt1iX1bdZJL0003QK,0:1] [category: REQ] [conn: 1285] [op: 0] [msgID: 1] [bindType: SIMPLE] [dn: uid=redacted,ou=redacted,o=redacted,c=redacted] BIND

Configuration files

We opted to use the sourcetype "odl:oud". This allows future extension into other ODL-formatted logs.

props.conf

Note the 3-part REPORT processing. This utilizes regexes in transforms.conf to process the headers, the key-value pairs, and the trailing message. This could then be extended to utilize portions of that extraction for other log types that fall under the ODL format.

[odl:oud]
REPORT-oudparse = extractOUDheader, extractODLkv, extractOUDmessage

transforms.conf

This was the piece I could have used some concrete examples of. These three report extractions allow a pretty flexible ingestion of all parts of the logs. The key here was separating the _KEY_1 _VAL_1 extraction into its own transform with the REPEAT_MATCH flag set.

# extract fixed leading values from Oracle Unified Directory log message
[extractOUDheader]
REGEX = ^\[(?<timestamp>\d{4}[^\]]+)\] \[(?<organization_id>[^\]]+)\] \[(?<message_type>[^\]]+)\] \[(?<message_id>[^\]]+)\] \[(?<component>[^\]]+?)\]

# extract N number of key-value pairs from Oracle Diagnostic Logging log body
[extractODLkv]
REGEX = \[(?<_KEY_1>[^:]+?): (?<_VAL_1>[^\]]+?)\]
REPEAT_MATCH = true

# extract trailing, arbitrary message text from Oracle Unified Directory log
[extractOUDmessage]
REGEX = \[[^:]+: [^\]]+\] (?<message>[^\[].*)$

The final regex there looks for a preceding key-value pair, NOT followed by a new square bracket, with any arbitrary characters thereafter to end the line. I initially tried to make one regex to perform all of this, which doesn't really work without being very prescriptive about the structure. With an unknown number of key-value pairs, that is not ideal, and this solution seemed much more Splunk-esque. I hope this helps someone else!
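For anyone adapting this, a quick way to sanity-check the key-value regex at search time before committing to transforms.conf is an inline rex with max_match. This is just a sketch: rex here returns multivalue odl_key/odl_val fields (illustrative names, not part of the config above) rather than the dynamic _KEY_n/_VAL_n field names the REPEAT_MATCH transform produces, but it confirms the pattern matches your events.

sourcetype=odl:oud
| rex max_match=0 "\[(?<odl_key>[^:\]]+): (?<odl_val>[^\]]+?)\]"
| table _raw, odl_key, odl_val

Once the transform is deployed, a simple check such as "sourcetype=odl:oud | table timestamp, organization_id, message_type, conn, op, msgID, message" should show the header fields, the extracted pairs, and the trailing message.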
Hi Experts,

The file ACF2DS_Data.csv contains columns including TIMESTAMP, DS_NAME, and JOBNAME. I need to match the DS_NAME column from this file with the LKUP_DSN column in DSN_LKUP.csv to obtain the corresponding events from ACF2DS_Data.csv. The query provided below is not working as expected. Could you please assist me in resolving the issue with the query?

source="*ACF2DS_Data.csv" index="idxmainframe" earliest=0 latest=now
    [search source="*DSN_LKUP.csv" index="idxmainframe" earliest=0 latest=now
    | eval LKUP_DSN = "%".LKUP_DSN."%"
    | where like(DS_NAME,LKUP_DSN)
    | table DS_NAME]
| table TIMESTAMP, DS_NAME, JOBNAME

Thanks,
Ravikumar
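One possible reformulation, as a sketch only: DSN_LKUP.csv events have no DS_NAME field, so like(DS_NAME, LKUP_DSN) has nothing to compare inside the subsearch. Instead, the subsearch can return wildcarded DS_NAME terms for the outer search to match, assuming DS_NAME is extracted on the ACF2DS_Data.csv events and the lookup list is small enough for a subsearch:

source="*ACF2DS_Data.csv" index="idxmainframe" earliest=0 latest=now
    [ search source="*DSN_LKUP.csv" index="idxmainframe" earliest=0 latest=now
    | dedup LKUP_DSN
    | eval DS_NAME = "*".LKUP_DSN."*"
    | fields DS_NAME
    | format ]
| table TIMESTAMP, DS_NAME, JOBNAME

The subsearch expands to a list of DS_NAME="*value*" terms, so the wildcard matching happens in the outer search rather than in an eval.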
Here is a simple table with dates and whether the user has accessed the account (marked with an 'x'). Across the top are the dates on which the report is run, looking back 10 days including the day the report is run.

Access date     14/01/2025  15/01/2025  16/01/2025  17/01/2025  18/01/2025  19/01/2025  20/01/2025  21/01/2025  22/01/2025
05/01/2025
06/01/2025
07/01/2025
08/01/2025
09/01/2025
10/01/2025  x
11/01/2025  x
12/01/2025  x
13/01/2025  x
14/01/2025
15/01/2025
16/01/2025  x
17/01/2025
18/01/2025  x
19/01/2025  x
20/01/2025  x
21/01/2025  x
22/01/2025

What do you expect the count to be for each of those days? Do you expect a single count at the end for the whole period? What does that count represent, and why? Please fill in all the detail.
If we follow the cycle of 10 days as you said (N) and 4 as the number of days (M) (which is also the number of times, because the same person or department accessing an account on the same day is recorded as 1 day): assuming that within the 10 days up to today the same person and department accessed an account, each day of access is counted as 1 visit, and the alarm fires when the final value is greater than 4. To achieve this over a larger time interval, I would append the results of each 10-day period separately.
TL;DR: check your file permissions.

A bit late to the party, but this one had me swearing for a few hours. I work for an MSP and manage several separate Splunk Enterprise environments. After the latest upgrade to 9.3.2, I noticed that for a few of them the DS was "broken". Checked this post and others and went over configs side-by-side to see if there were any differences in outputs.conf, distsearch.conf, indexes.conf, etc. There shouldn't really be many of those, since almost all config is done through Puppet, and there are not many reasons to change settings on an individual server basis. All types of internal logs are forwarded to the indexer cluster. Always.

Turns out that the upgrade process (ours or Splunk's) had left /opt/splunk/var/log/client_events owned by root, with 700 permissions. No wonder the files weren't even written to begin with... I suspect that on the environments that did work, I had out of habit run a chown -R splunk:splunk to ensure that I hadn't messed up something somewhere.

Lesson: check the obvious stuff first.
Why is it "Start time: 7, end time: 16, name, department, account, number of occurrences: 5" when you don't know on the 16th that this is the start of another set of 4 consecutive accesses? What would you get if it was accessed on 10, 11, 12, 13, 16, 18, 19, 20, 21?
If the same person and department access the same account for 2 consecutive 4-day periods, they will receive:

Start time: 5, end time: 14, name, department, account, number of occurrences: 4
Start time: 6, end time: 15, name, department, account, number of occurrences: 4
Start time: 7, end time: 16, name, department, account, number of occurrences: 5
Start time: 8, end time: 17, name, department, account, number of occurrences: 6
Start time: 9, end time: 18, name, department, account, number of occurrences: 7
Start time: 10, end time: 19, name, department, account, number of occurrences: 8
Start time: 11, end time: 20, name, department, account, number of occurrences: 7
Start time: 12, end time: 21, name, department, account, number of occurrences: 6
Start time: 13, end time: 22, name, department, account, number of occurrences: 5
Start time: 14, end time: 22, name, department, account, number of occurrences: 4
Start time: 15, end time: 23, name, department, account, number of occurrences: 4
Start time: 16, end time: 24, name, department, account, number of occurrences: 4

Because our data is collected today and yesterday, according to what you said, 10 is the cycle (N) and 4 is the number of days (M) (also the number of times, because the same person or department accessing one account on the same day is recorded as 1 day).
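For what it's worth, a minimal SPL sketch of this kind of rolling 10-day count (the field names name, department and account and the index account_access are assumptions, and it counts distinct access days in the last N=10 days rather than strictly consecutive days, which is still being clarified in this thread):

index=account_access earliest=-9d@d latest=now
| bin span=1d _time as day
| stats count as daily_accesses by day, name, department, account
| stats dc(day) as days_accessed, min(day) as start_day, max(day) as end_day by name, department, account
| where days_accessed >= 4
| convert ctime(start_day) ctime(end_day)

Running this once per day (for example as a scheduled alert) gives one result row per person/department/account combination that was accessed on at least M=4 different days within the window.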
One more comment which is mandatory to know: you cannot manage the DS itself with DS functionality! Don't even try it!!! For that reason it's good to use a dedicated DS server if/when you have several clients to manage. The DS can be a physical or virtual server; it doesn't matter which, as long as there are enough resources for it. Currently you can even make a pool of DSs working as one. If you are a Splunk Cloud customer you can order a dedicated DS license from Support by creating a service ticket. I have never tried whether I can do this as a Splunk Enterprise customer too. After 9.2 there are some new configuration options that you must set on the DS, especially if/when you are forwarding its logs to centralized indexers.
OK so what would you get if there were two periods of 4 consecutive days (10, 11, 12 and 13, and 16, 17, 18 and 19)?
Yes, what I want to achieve is to count the alarm results for every first 7 days plus 1 day
OK so the number of "visits" is because the 10 day periods 6-15, 7-16, 8-17, 9-18 and 10-19 all contain the same period of 4 consecutive visits (10, 11, 12 and 13)?
Because your condition is that M is 4, I will sound an alarm when the user accesses the same account more than 4 times in a row
Ahhh sorry @harishsplunk7 - I misread! I've rearranged the query a bit now, how does this look?

| rest splunk_server=local "/servicesNS/-/-/data/ui/views"
| rename title as dashboard, eai:acl.app as app
| fields dashboard app
| eval isDashboard=1
| append
    [ search index=_internal sourcetype=splunkd_ui_access earliest=-90d@d uri="*/data/ui/views/*"
    | rex field=uri "/servicesNS/(?<user>[^/]+)/(?<app>[^/]+)/data/ui/views/(?<dashboard>[^\.?/\s]+)"
    | search NOT dashboard IN ("search", "home", "alert", "lookup_edit", "@go", "data_lab", "dataset", "datasets", "alerts", "dashboards", "reports")
    | stats count as accessed by app, dashboard ]
| stats sum(accessed) as accessed, values(isDashboard) as isDashboard by app, dashboard
| fillnull accessed value=0
| search isDashboard=1 accessed=0

Please let me know how you get on, and consider accepting this answer or adding karma to it if it has helped.

Regards
Will
Please explain why the number of visits is 5 when the user has only accessed the account for the first 4 days (presumably 10th, 11th, 12th and 13th)?
Hmm, is your ES rule looking at All Time? If so, does it need to? This could chew up quite a bit of resource.
This one actually fixed the issue. I'd been working on this for over a day without a solution.
Hi peers, how is the compatibility between the OTel Collector and AppDynamics? Is it efficient and recommendable? If we use the OTel Collector for exporting data to AppD, is it still required to use AppD agents as well? How will the licensing work if we use OTel for exporting data to AppD? Is the OTel Collector compatible with both the on-premise and SaaS environments of AppD? Thanks
Hi @Praz_123, if you're speaking of the ulimit of Splunk servers, you can use the Monitoring Console health check. If you're speaking of forwarders (Universal or Heavy, it's the same), there's no direct solution and you should use the solution from @livehybrid: a shell script input (to insert in a custom add-on) that extracts this value and sends it to the indexers. Ciao. Giuseppe
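As a rough sketch of that approach (the add-on name, script name, interval, index and sourcetype below are placeholders, not an existing TA), the scripted input would be defined in the custom add-on's inputs.conf, with bin/ulimit_check.sh simply printing the output of ulimit -a so that it gets indexed like any other event:

# $SPLUNK_HOME/etc/apps/TA-ulimit-check/local/inputs.conf (hypothetical add-on)
[script://./bin/ulimit_check.sh]
interval = 86400
sourcetype = forwarder:ulimit
index = os
disabled = false

The indexers then receive one event per forwarder per day, which can be reported on with a simple search by host.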
Hi @raleighj, I suppose that you're using Enterprise Security; if yes, see the Security Posture dashboard to get this information. Ciao. Giuseppe
Hi @kiran_panchavat, thanks for your response. Which server contains the `passwords.conf` file for the Qualys TA (TA-QualysCloudPlatform)? I couldn't find it on the Heavy Forwarder (HF).