All Posts

Hi, this is either DM acceleration or report acceleration. _ACCELERATE_111111-22222-333-4444-123456789_search_nobody_123456978_ACCELERATE_ shows that it is under the Search & Reporting app and owned by nobody. 123456978 is quite probably a report acceleration summary ID. You can check this e.g. from Settings -> Searches, Reports, and Alerts: click through the accelerated reports one by one and click the lightning-bolt icon. It opens a new screen where this summary ID is shown. There is probably also at least a REST query which you can use. r. Ismo
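For example, a minimal REST sketch along these lines should list the accelerated reports and their app/owner (the auto_summarize field comes from the saved/searches endpoint; adjust the table to taste):

| rest /servicesNS/-/-/saved/searches ``` list saved searches visible to you ```
| search auto_summarize=1 ``` keep only accelerated reports ```
| table title eai:acl.app eai:acl.owner auto_summarize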
Longer than yesterday helps though. OK - here are some thoughts I had on getting around this, without having had a chance to play with it yet. SEDCMD looks like a possibility, while knowing it's not going to be a newbie kind of thing. There is support for backreferences, so I thought of copying a core meta field as an addition into each stock_id, and then splitting the structure into events by each stock_id.
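As a rough, untested illustration of that idea only - the sourcetype, field names and regexes below are hypothetical and assume a JSON-ish payload with repeated stock_id elements:

# props.conf - hypothetical sourcetype and fields, a sketch, not a tested config
[my_stock_feed]
# copy a batch-level field into every stock_id element using a backreference
SEDCMD-copy_meta = s/"stock_id":"([^"]+)"/"stock_id":"\1","copied_meta":"TBD"/g
# then break the structure into one event per stock_id element
LINE_BREAKER = \}(,)\{"stock_id"
SHOULD_LINEMERGE = false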
Hi, I must agree with @PickleRick that this is something where you should hire an experienced Splunk consultant with good knowledge of the infra side too. You definitely need someone to help you! There is a lot of missing information that is needed to help you choose the correct path. At the least we need the following:
- are you now on-prem with hardware, or in some virtual environment
- are you on cloud: AWS, Azure, GCP
- what is your target platform (still on-prem with HW, virtual, some cloud)
- are those S3 buckets on-prem, in AWS, or somewhere else
- what kind of connectivity you have between the Splunk server and S3
If you must do this by yourself without help from an experienced Splunk consultant, I would probably try the following approach, but this definitely depends on the answers to the questions above:
- set up an additional server with the new OS but the current Splunk version
- migrate the current Splunk installation onto it (e.g. https://community.splunk.com/t5/Deployment-Architecture/Splunk-Migration-from-existing-server-to-a-new-server/m-p/681647)
- update it to the target Splunk version
- add a new SH and migrate (move) the SH-side apps onto it
- add a new cluster manager and copy the indexer-side apps & TAs into its manager_apps
- add the migrated node as the first indexer in the cluster
- add a second (and maybe third) node as additional indexers
- if and only if you have a fast enough storage network for the S3 buckets, you could enable SmartStore in this cluster (see the sketch after this list)
If the above works without issues, then stop the original standalone instance and start the production migration from scratch, as you will have proven that your test works and you will have step-by-step instructions for how to do it. After you have done your real production migration, change the UFs and other sources to send events to this new environment. r. Ismo
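A minimal SmartStore sketch for that last step, assuming an AWS S3 bucket (volume name, bucket and endpoint are placeholders), pushed from the cluster manager's manager_apps:

# indexes.conf - placeholder names, adjust to your environment
[volume:remote_store]
storageType = remote
path = s3://my-smartstore-bucket
remote.s3.endpoint = https://s3.eu-west-1.amazonaws.com

[default]
# point indexes at the remote volume; _index_name expands per index
remotePath = volume:remote_store/$_index_name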
If you find re-indexed files/events, it usually means that someone has removed the Splunk UF installation and reinstalled it. In practice that means the _fishbucket directory on the UF was removed.
The easiest way is to create your own app where you add the needed configuration files with the correct attributes, then install that app on all nodes where it is needed.

In server.conf:

[sslConfig]
sslVersions = tls1.2
sslVersionsForClient = tls1.2

In web.conf:

[settings]
sslVersions = tls1.2

Put those conf files into your app's default folder with the other needed confs, then just install it onto the servers. r. Ismo
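For instance, the app could look something like this (the app name is just an example):

tls_hardening/
  default/
    app.conf
    server.conf
    web.conf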
Hi all, I am using Office 365. I have an Office 365 unified group, and users are getting removed from this group automatically every day. I want to find out who has removed or added users to this group. When I use the below query, I am not getting any output; please guide me. Let's say my group name is MyGroup1 and its email address is MyGroup1@contoso.com.

sourcetype=o365:management:activity (Operation="*group*") unifiedgroup="*MyGroup1*"
| rename ModifiedProperties{}.NewValue AS ModAdd
| rename ModifiedProperties{}.OldValue AS ModRem
| rename UserId AS "Actioned By"
| rename Operation AS "Action"
| rename ObjectId AS "Member"
| rename TargetUserOrGroupName as modifiedUser
| table _time, ModAdd, ModRem, "Action", Member, "Actioned By" "modifiedUser"
| stats dc values("modifiedUser") by Action "Actioned By"
Hi, if/when this is a Splunk-supported TA, then just create a support case. I suppose that there is some issue with reading those events from Entra ID. After it has tried to read them and failed for some unknown reason, it still writes a checkpoint (which describes what it has read). Then it starts from that checkpoint on the next round and misses some events which have incorrectly been marked as read. Or something similar. Anyhow, inform the creator of the TA.
Hi, I'm not 100% sure, but my understanding is that alerts etc. are configured in the owner's local time, not server time. So could you check who configured that alert and what TZ they have configured in their browser/user settings? r. Ismo
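A quick way to see who owns the alert and its schedule could be something like this (a sketch; the title filter is a placeholder):

| rest /servicesNS/-/-/saved/searches
| search is_scheduled=1 title="your alert name" ``` placeholder alert name ```
| table title eai:acl.owner cron_schedule next_scheduled_time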
Hi, as this is a generating command (https://docs.splunk.com/Documentation/SplunkCloud/latest/SearchReference/Commandsbytype#Generating_commands) you must add "|" in front of it. It also must be the first command in your SPL (or inside a subsearch).
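For example, with tstats (used here just as an example of a generating command):

| tstats count WHERE index=_internal BY sourcetype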
Please accept that solution as it works.
You should also look at the timewrap command, which can help you with this kind of comparison.
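A minimal illustration (the index and spans are just examples):

index=_internal earliest=-7d@d latest=@d
| timechart span=1h count
| timewrap 1d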
Or someone has added more servers under the Linux audit log collection. Then the best option is to look at when the volume increased and whether the node count also increased on the Splunk side. If not, then just look at whether the content on any individual node has increased or changed. Based on that, you will have more to discuss with your Linux and/or Splunk DS admins.
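Something along these lines can show both the per-day volume trend and the host count (the index and sourcetype names are assumptions - adjust to your environment):

| tstats count WHERE index=linux sourcetype=linux:audit BY _time span=1d host ``` assumed index/sourcetype ```
| timechart span=1d sum(count) AS events dc(host) AS hosts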
Hi, one thing you should do is check what the events look like in the raw data. Probably the easiest way is to check it via "Event Actions -> Show Source". That way you will see how it really is. After that you will know (especially with JSON) whether there are any spaces or other characters which you need to take care of in your strings. r. Ismo
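If you want to verify this in SPL, a small sketch like this can flag values with leading or trailing whitespace (my_field is a placeholder for your actual field):

<your base search>
| eval raw_len=len(my_field), trimmed_len=len(trim(my_field)) ``` my_field is a placeholder ```
| where raw_len != trimmed_len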
Hi @raculim .. @PickleRick 's suggestion works fine, tested (9.3.0)  
My apologies. I was switching between two different approaches and the filters got crossed. To use the subsearch method above, modify that line to | where isnotnull(OS)

index=A sourcetype="Any"
| stats values("IP address") as "IP address" by Hostname OS
| append [search index=B sourcetype="foo" | stats values(Reporting_Host) as Reporting_Host]
| eventstats values(eval(lower(Reporting_Host))) as Reporting_Host
| where isnotnull(OS)
| mvexpand "IP address"
| eval match = if(lower(Hostname) IN (Reporting_Host) OR 'IP address' IN (Reporting_Host), "ok", null())
| stats values("IP address") as "IP address" values(match) as match by Hostname OS
| fillnull match value="missing"

Depending on your deployment, combining the two index searches could improve performance, like this:

(index=A sourcetype="Any") OR (index=B sourcetype="foo")
| eventstats values(eval(lower(Reporting_Host))) as Reporting_Host
| where index != "B"
| mvexpand "IP address"
| eval match = if(lower(Hostname) IN (Reporting_Host) OR 'IP address' IN (Reporting_Host), "ok", null())
| stats values("IP address") as "IP address" values(match) as match by Hostname OS
| fillnull match value="missing"

But eventstats and mvexpand could be bigger performance hindrances. There could be ways to avoid mvexpand; there could be ways to improve eventstats. But unless you can isolate the main contributor to slowness, they are not worth exploring.

Performance is a complex subject with any querying language. You can start by doing some basic tests. For example, run those two subsearches separately and compare with the combined search. If the total time is comparable, the index search is the main hindrance. That will be very difficult to improve. Another test could be to add dedup before stats. And so on.
The "Expires" setting doesn't stop your alert from running after 24 hours. Your alert will continue to run daily indefinitely; the expiration only prevents repeated triggering within 24 hours of the ... See more...
The "Expires" setting doesn't stop your alert from running after 24 hours. Your alert will continue to run daily indefinitely; the expiration only prevents repeated triggering within 24 hours of the last trigger, helping to avoid alert fatigue for ongoing issues.   Hope this helps. 
OK, it's the right time for me to work with these Slack add-ons/apps and Splunk Enterprise.
Hi @qs_chuy .. good catch, let me check this and get back to you. My mind-voice to me... some more "detailed understanding" is required of tstats, data models, accelerated vs. non-accelerated. Thx
You make it sound so easy, but I should say that I'm a Splunk Observability newbie. If I add an APM Detector it doesn't give me many avenues to customise it, and if I create a Custom Detector I seem to be in an area where newbies shouldn't be. However, I tried adding "errors_sudden_static_v2" for the "A" signal, and beside it there is an Add Filter button. Is this where I need to "filter for the errors, extract the customerid and count by customerid"? My use case sounds like it should be a fairly common one, so is there an explanatory guide somewhere on doing things like this? I haven't found one yet. If I show the SignalFlow for my APM Detector, this is what it looks like:

from signalfx.detectors.apm.errors.static_v2 import static as errors_sudden_static_v2
errors_sudden_static_v2.detector(
    attempt_threshold=1,
    clear_rate_threshold=0.01,
    current_window='5m',
    filter_=(
        filter('sf_environment', 'prod')
        and (
            filter('sf_service', 'my-service-name')
            and filter('sf_operation', 'POST /api/{userId}/endpointPath')
        )
    ),
    fire_rate_threshold=0.02,
    resource_type='service_operation'
).publish('TeamPrefix my-service-name /endpointPath errors')

The {userId} in the sf_operation is what I want to group the results on, and only alert if a particular userId is generating a high number of errors compared to everybody else. Thank you.
I got around this by installing the Slack Add-on for Splunk.