All Posts


You're thinking in the wrong order. That's why I'm saying it's not possible with Splunk alone. If you don't know this one, it's one of the mainstays of understanding the Splunk indexing process: https://community.splunk.com/t5/Getting-Data-In/Diagrams-of-how-indexing-works-in-the-Splunk-platform-the-Masa/m-p/590774

As you can see, line breaking is one of the absolute first things that happens to the input stream. You can't "backtrack" within the ingestion pipeline to run SEDCMD before line breaking. And, as I wrote already, it's really a very bad idea to tackle structured data with regexes.
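For reference, a minimal props.conf sketch of that ordering (the sourcetype name and the regexes here are made up):

# props.conf - sourcetype and patterns are placeholders
[my:json:feed]
# line breaking is applied first, on the raw input stream
LINE_BREAKER = ([\r\n]+)
SHOULD_LINEMERGE = false
# SEDCMD runs later in the parsing pipeline, on each already-broken event,
# so it cannot change where the event boundaries were drawn
SEDCMD-strip_prefix = s/^some-prefix//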
I have a KPI alert using an ad hoc search which outputs custom fields, and then a custom alert action is configured on the Notable Event Aggregation Policy (NEAP) action rules, which triggers the action on the KPI notable event. alert_actions.conf has all the params defined. But $results.fieldname$ is always blank in the script. results_file only has the ITSI/KPI-specific fields but not the custom fields. How can I get the custom fields through to the script?
Thank you. I deleted the file and it worked great. 
This means that you don't need a separate DS server until you have something like 50 UF deployment clients. Usually you should create your own app to manage that DS configuration on the UFs; a sketch is below. You could use the same or a separate app for outputs.conf too. If you set those during the installation phase, it's hard to change them later, as they end up under ...\etc\system\local, which you cannot manage via the DS.
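As a hedged sketch of what such apps could look like (the app names, host names and ports are made up):

# etc/apps/org_all_deploymentclient/default/deploymentclient.conf
[deployment-client]

[target-broker:deploymentServer]
targetUri = ds.example.com:8089

# etc/apps/org_all_forwarder_outputs/default/outputs.conf
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = idx1.example.com:9997,idx2.example.com:9997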
Hi

Have you looked at e.g. the Splunk Add-on for Unix and Linux https://splunkbase.splunk.com/app/833 to ingest those logs into Splunk? Usually it's best to use a TA, as those do a lot of the needed stuff, like making the inputs CIM compliant https://splunkbase.splunk.com/app/1621. Then you can easily use e.g. the InfoSec app https://splunkbase.splunk.com/app/4240 to monitor what is happening in your environment.

Sourcetypes with the suffix -too_small are something that has no sourcetype definition on the Splunk side; Splunk just generates that name for them. You should do real data onboarding for those files/sources.

One other thing you should check and change if needed: you should never run the UF on those nodes as root. Use some other user like splunk or splunkfwd. Then your issue becomes that this user doesn't have access to all those logs, and you need to fix that too.

r. Ismo
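To avoid the generated -too_small names, define the sourcetype explicitly during onboarding; a minimal inputs.conf sketch (the path, sourcetype and index are placeholders):

# inputs.conf on the UF
[monitor:///var/log/myapp/app.log]
sourcetype = myapp:log
index = os_linux
disabled = false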
Hi

Actually this has changed in 9.x. Currently you can have newer UF/HF versions than the Splunk server or SCP has. Earlier (pre-9) the instruction was that the server must be on a higher or equal version than the UF/HF/IHF.

I prefer to wait some time after a new version has been released to see whether there are any issues with it, just like I do with the server side. Usually you could/should do those upgrades e.g. a couple of times per year, like with any other OS/tools. Of course, when there is a security issue, you should do updates outside of your normal update cycle.

r. Ismo
Hi

As this is quite an old thread, please create a new question to get an answer. I suppose most of us don't read old, already-answered threads to look for new comments/questions.

Based on the field name, are you trying to convert epoch time to epoch?
What is the reason you are trying this, and what is the original issue you are solving?
Hi

Or is it possible to use this example with a REST query and cURL on the CLI? https://community.splunk.com/t5/Other-Usage/Why-can-t-I-change-alert-with-REST-It-change-permission-from-app/td-p/646456

r. Ismo
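A rough sketch of what the cURL side could look like, assuming a saved search named MySearch in the search app (host, credentials and names are placeholders):

# update the ACL of a saved search via the management port
curl -k -u admin:changeme \
  https://localhost:8089/servicesNS/nobody/search/saved/searches/MySearch/acl \
  -d owner=admin \
  -d sharing=app \
  -d perms.read=*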
regex101.com is your friend https://regex101.com/r/rB5kWs/1
Hi, as this is quite an old thread, it's better to create a new question to get someone to answer you. r. Ismo
So basically your issue is to know whether there are data integrations which haven't sent events even though they should? There are several apps and examples in the community showing how this can be solved.
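One common pattern, for example, is comparing the last seen event time per source against a threshold; a minimal SPL sketch (the index filter and the 24-hour cutoff are just assumptions):

| tstats latest(_time) AS last_seen WHERE index=* BY index, sourcetype
| eval hours_since_last_event = round((now() - last_seen) / 3600, 1)
| where hours_since_last_event > 24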
Hi

I think you need a separate lookup file which contains all users that have the capability to log in to Splunk. If a user hasn't ever logged in, then (depending on how you have configured your users, e.g. Splunk users, LDAP users, SSO users) it's quite probable that you don't have those names on your system. For that reason, REST cannot return them to you. You just need to replace that subquery [|rest....] in @richgalloway's answer with an inputlookup query for those user accounts.

r. Ismo
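Roughly like this, assuming a lookup named all_splunk_users.csv with a user column (the lookup name and the audit search inside the brackets are assumptions, so adjust to your setup):

| inputlookup all_splunk_users.csv
| search NOT [ search index=_audit action="login attempt" info=succeeded
  | stats count BY user
  | fields user ]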
Hi

This is either DM acceleration or report acceleration.

_ACCELERATE_111111-22222-333-4444-123456789_search_nobody_123456978_ACCELERATE_

This shows that it is under the Search & Reporting app and owned by nobody. 123456978 is quite probably the report acceleration Summary ID. You can check this e.g. from Settings -> Searches, Reports, and Alerts. Then just click, one by one, the reports which are accelerated, and click the lightning icon. It opens a new screen where this Summary ID is shown. There is probably also a REST query which you can use.

r. Ismo
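For the REST route, something like this should list the report acceleration summaries, though the available fields vary by version, so verify on yours:

| rest /services/admin/summarization splunk_server=local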
Longer than yesterday helps though. Ok - here are some thoughts I had for getting around this, without having had a chance to play with it yet. SEDCMD looks like a possibility, while knowing it's not going to be a newbie kind of thing. There is support for backreferences, so I thought of copying a core meta field as an addition into each stock_id, and then splitting the structure into events by each stock_id.
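To make the backreference idea concrete, a hypothetical props.conf line (the field name and the copied value are invented):

# copy the matched stock_id back into the event via \1
SEDCMD-tag_stock = s/"stock_id":"([^"]+)"/"stock_id":"\1","stock_id_copy":"\1"/g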
Hi

I must agree with @PickleRick that this is something where you should hire an experienced Splunk consultant with good knowledge of the infra part too. You definitely need someone to help you! There is a lot of missing information which is needed to help you choose the correct path. At least we need the following:

- are you now on-prem with hardware, or on some virtual environment
- are you in a cloud (AWS, Azure, GCP)
- what is your target platform (still on-prem with HW, virtual, some cloud)
- are those S3 buckets on-prem, in AWS, or somewhere else
- what kind of connectivity you have between the Splunk server and S3

If you must do this by yourself without help from an experienced Splunk consultant, I would probably try the next approach, but this definitely depends on the answers to the above questions.

1. set up an additional server with a new OS but the current Splunk version
2. migrate the current Splunk installation onto it (e.g. https://community.splunk.com/t5/Deployment-Architecture/Splunk-Migration-from-existing-server-to-a-new-server/m-p/681647)
3. update it to the target Splunk version
4. add a new SH and migrate (move) the SH-side apps onto it
5. add a new cluster manager and copy the indexer-side apps & TAs into its manager_apps
6. add the migrated node as the 1st indexer
7. add a 2nd (and maybe 3rd) node as additional indexers

If and only if you have a fast enough storage network for the S3 buckets, you could then enable SmartStore in this cluster (a sketch follows below).

If the above works without issues, stop the original standalone instance and start the production migration from scratch, as you have then proven that your test works and you have step-by-step instructions for how to do it. After you have done the real production migration, change the UFs and other sources to send events to this new environment.

r. Ismo
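For the SmartStore step, a hedged indexes.conf sketch (the bucket, path and endpoint are placeholders, and the auth settings your setup needs may differ, so check the docs):

# indexes.conf on the indexers
[volume:remote_store]
storageType = remote
path = s3://my-smartstore-bucket/indexes
remote.s3.endpoint = https://s3.us-east-1.amazonaws.com

[default]
remotePath = volume:remote_store/$_index_name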
If you find reindexed files/events, this usually means that someone has removed the Splunk UF installation and reinstalled it. In practice that means the _fishbucket directory on the UF was removed.
The easiest way is to just create your own app where you add the needed configuration files with the correct attributes. Then install that app on all nodes where it is needed.

In server.conf

[sslConfig]
sslVersions = tls1.2
sslVersionsForClient = tls1.2

In web.conf

[settings]
sslVersions = tls1.2

Put those conf files into your app's default folder with the other needed confs. Then just install it on the servers.

r. Ismo
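The resulting app could look something like this (the app name is made up):

org_all_tls_settings/
    default/
        app.conf
        server.conf
        web.conf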
Hi All

I am using Office 365. I have an Office 365 unified group, and users are getting removed from this group automatically every day. I want to find out who has removed or added users to this group. When I use the below query, I am not getting any output; please guide me. Let's say my group name is MyGroup1 and its email address is MyGroup1@contoso.com.

sourcetype=o365:management:activity (Operation="*group*") unifiedgroup="*MyGroup1*"
| rename ModifiedProperties{}.NewValue AS ModAdd
| rename ModifiedProperties{}.OldValue AS ModRem
| rename UserId AS "Actioned By"
| rename Operation AS Action
| rename ObjectId AS Member
| rename TargetUserOrGroupName AS modifiedUser
| table _time, ModAdd, ModRem, Action, Member, "Actioned By", modifiedUser
| stats dc(modifiedUser) AS distinct_users, values(modifiedUser) AS modifiedUsers BY Action, "Actioned By"
Hi

If/when this is a Splunk-supported TA, then just create a support case. I suppose there is some issue with reading those events from Entra ID. Perhaps it tries to read them, fails for some unknown reason, but still writes a checkpoint (describing what it has supposedly read). Then on the next round it starts from that checkpoint and misses the events which were incorrectly marked as read. Or something similar. Anyhow, inform the creator of the TA.