Hi, by chance I discovered that a power user with admin rights disabled the Sysmon agent and the Splunk forwarder on his computer to gain some extra CPU for his daily tasks. To react more quickly to this type of "issue" in the future, I would like some alerts in Splunk informing me whenever any host (initially domain-joined workstations and servers) might have stopped reporting events. Has someone implemented something like this already? Any good article to follow? I plan to create a lookup table using ldapsearch and then an alert detecting which hosts from that table are not present in a basic host listing search on the Sysmon index, for example. Any other better or simpler approach? Many thanks.
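What I have in mind is roughly the following, as an untested sketch; the lookup name (expected_hosts.csv), its host field, the sysmon index and the 24-hour threshold are placeholders for whatever the ldapsearch export and our environment actually use:

| tstats latest(_time) AS last_seen WHERE index=sysmon earliest=-7d BY host
| inputlookup append=true expected_hosts.csv
| stats max(last_seen) AS last_seen BY host
| where isnull(last_seen) OR last_seen < relative_time(now(), "-24h")
| eval last_seen=strftime(last_seen, "%Y-%m-%d %H:%M:%S")

The idea is that hosts that exist only in the lookup (never reported) or whose latest event is older than 24 hours end up in the alert results.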
Using the new appdynamics-gradle-plugin 24.1.0 makes the build fail with the following exception:

Execution failed for task ':ControlCenterApp:expandReleaseArtProfileWildcards'.
> A failure occurred while executing com.android.build.gradle.internal.tasks.ExpandArtProfileWildcardsTask$ExpandArtProfileWildcardsWorkAction
> invalid END header (bad central directory offset)

The relevant stacktrace part:

...
Caused by: java.util.zip.ZipException: invalid END header (bad central directory offset)
	at com.android.tools.profgen.ArchiveClassFileResourceProvider.getClassFileResources(ClassFileResource.kt:62)
	at com.android.tools.profgen.ProfgenUtilsKt.expandWildcards(ProfgenUtils.kt:60)
	at com.android.build.gradle.internal.tasks.ExpandArtProfileWildcardsTask$ExpandArtProfileWildcardsWorkAction.run(ExpandArtProfileWildcardsTask.kt:120)
	at com.android.build.gradle.internal.profile.ProfileAwareWorkAction.execute(ProfileAwareWorkAction.kt:74)
	at org.gradle.workers.internal.DefaultWorkerServer.execute(DefaultWorkerServer.java:63)
	at org.gradle.workers.internal.NoIsolationWorkerFactory$1$1.create(NoIsolationWorkerFactory.java:66)
	at org.gradle.workers.internal.NoIsolationWorkerFactory$1$1.create(NoIsolationWorkerFactory.java:62)
	at org.gradle.internal.classloader.ClassLoaderUtils.executeInClassloader(ClassLoaderUtils.java:100)
	at org.gradle.workers.internal.NoIsolationWorkerFactory$1.lambda$execute$0(NoIsolationWorkerFactory.java:62)
	at org.gradle.workers.internal.AbstractWorker$1.call(AbstractWorker.java:44)
	at org.gradle.workers.internal.AbstractWorker$1.call(AbstractWorker.java:41)
	at org.gradle.internal.operations.DefaultBuildOperationRunner$CallableBuildOperationWorker.execute(DefaultBuildOperationRunner.java:204)
	at org.gradle.internal.operations.DefaultBuildOperationRunner$CallableBuildOperationWorker.execute(DefaultBuildOperationRunner.java:199)
	at org.gradle.internal.operations.DefaultBuildOperationRunner$2.execute(DefaultBuildOperationRunner.java:66)
	at org.gradle.internal.operations.DefaultBuildOperationRunner$2.execute(DefaultBuildOperationRunner.java:59)
	at org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:157)
	at org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:59)
	at org.gradle.internal.operations.DefaultBuildOperationRunner.call(DefaultBuildOperationRunner.java:53)
	at org.gradle.internal.operations.DefaultBuildOperationExecutor.call(DefaultBuildOperationExecutor.java:73)
	at org.gradle.workers.internal.AbstractWorker.executeWrappedInBuildOperation(AbstractWorker.java:41)
	at org.gradle.workers.internal.NoIsolationWorkerFactory$1.execute(NoIsolationWorkerFactory.java:59)
	at org.gradle.workers.internal.DefaultWorkerExecutor.lambda$submitWork$0(DefaultWorkerExecutor.java:170)
	at org.gradle.internal.work.DefaultConditionalExecutionQueue$ExecutionRunner.runExecution(DefaultConditionalExecutionQueue.java:187)
	at org.gradle.internal.work.DefaultConditionalExecutionQueue$ExecutionRunner.access$700(DefaultConditionalExecutionQueue.java:120)
	at org.gradle.internal.work.DefaultConditionalExecutionQueue$ExecutionRunner$1.run(DefaultConditionalExecutionQueue.java:162)
	at org.gradle.internal.Factories$1.create(Factories.java:31)
	at org.gradle.internal.work.DefaultWorkerLeaseService.withLocks(DefaultWorkerLeaseService.java:264)
	at org.gradle.internal.work.DefaultWorkerLeaseService.runAsWorkerThread(DefaultWorkerLeaseService.java:128)
	at org.gradle.internal.work.DefaultWorkerLeaseService.runAsWorkerThread(DefaultWorkerLeaseService.java:133)
	at org.gradle.internal.work.DefaultConditionalExecutionQueue$ExecutionRunner.runBatch(DefaultConditionalExecutionQueue.java:157)
	at org.gradle.internal.work.DefaultConditionalExecutionQueue$ExecutionRunner.run(DefaultConditionalExecutionQueue.java:126)
	... 2 more

Environment: com.android.tools.build:gradle:8.3.1

Any ideas what is happening here? It kind of sounds like there is something wrong with the library packaging itself, but that's just a guess; in fact, I could open the renamed jar with a zip tool myself. Cheers, Jerg
Hello, I am trying to install the Splunk Operator using OperatorHub in an OpenShift cluster through the web console, but it is failing with the below error:

installing: waiting for deployment splunk-operator-controller-manager to become ready: deployment "splunk-operator-controller-manager" not available: Deployment does not have minimum availability.

I also tried to follow the steps mentioned in this document, How to deploy Splunk in an OpenShift environment | by Eduardo Patrocinio | Medium, but found the below in the CLI as well:

oc apply -f https://raw.githubusercontent.com/patrocinio/openshift/main/splunk/standalone-standalone.yaml
error: resource mapping not found for name: "standalone" namespace: "splunk-operator" from "https://raw.githubusercontent.com/patrocinio/openshift/main/splunk/standalone-standalone.yaml": no matches for kind "Standalone" in version "enterprise.splunk.com/v1alpha2"
ensure CRDs are installed first

My requirement is to create an instance of Splunk in a namespace in OCP. Kindly help with this. Splunk Operator for Kubernetes
Hello! So here is a doozy. We have blacklists in place using regex, in particular this one:

[WinEventLog://Microsoft-Windows-Sysmon/Operational]
source = XmlWinEventLog:Microsoft-Windows-Sysmon/Operational
disabled = false
start_from = oldest
current_only = 0
evt_resolve_ad_obj = 1
checkpointInterval = 5
blacklist1 = $XmlRegex= (C:\\Program Files\\Preton\\PretonSaver\\PretonService.exe)

along with some other blacklists in sequence as blacklist2, blacklist3, etc. Now, on my and some colleagues' local machines, where I have tested the inputs.conf file to see if we are blacklisting correctly, it all works: no logs come into Splunk Cloud. BUT when this is deployed to the estate using our deployment server, the blacklist doesn't work. Does anybody know where I can troubleshoot this from? I've run btool and the confs come out fine. We are local UF agents going to a DS and then to Splunk Cloud.
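So far the main thing I have been checking from the cloud side is which deployed hosts are still sending the supposedly blacklisted process, with a quick search along these lines (a rough sketch; the index is a placeholder for wherever our Sysmon data actually lands):

index=wineventlog source="XmlWinEventLog:Microsoft-Windows-Sysmon/Operational" "PretonService.exe"
| stats count latest(_time) AS last_seen BY host
| eval last_seen=strftime(last_seen, "%Y-%m-%d %H:%M:%S")

That at least narrows down whether every endpoint is ignoring the blacklist or only some of them.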
Hi Splunkers, let me provide a bit of background. We are ingesting logs into Splunk from our DLP service provider's API, using HEC. The script that runs the API reads the last 24 hours of events, gets the results in JSON format and sends them to Splunk Cloud via HEC. The problem I have is that with every pull some of the events get duplicated, e.g. the same event_id is ingested again because the event changes its status on a daily basis and the script picks it up again. So reporting is a nightmare. I know I can deduplicate at search time, but is there an easier way for Splunk to remove those duplicates and maintain the same event_id?
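For reporting I have been falling back to keeping only the most recent copy of each event_id at search time, roughly like this (a sketch; the index, sourcetype and field names are whatever the HEC input actually uses):

index=dlp sourcetype=dlp:events
| dedup event_id sortby -_time
| table _time event_id status

That keeps the latest status per event_id, but it would be nicer not to have the duplicates indexed in the first place.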
Hi everyone, I was wondering how I would go about a stats count for 2 fields where 1 field depends on the other. So for example we have a field called service version and another called service operation. Operations aren't necessarily exclusive to a version, but I was wondering if it was possible to do a stats count that would display something like: Service 1 - Operation 1: *how many* - Operation 2: *how many*, and so on. Is something like this possible?
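What I am picturing is one row per version with a count per operation, along the lines of this sketch (assuming the fields are extracted as service_version and service_operation; the index is a placeholder):

index=my_service_index
| chart count OVER service_version BY service_operation

A plain | stats count BY service_version service_operation would give the same numbers, just with one row per version/operation pair instead of a column per operation.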
Hello everyone, I've encountered an issue where certain customers appear to have duplicate ELB access logs. During a routine check, I noticed instances of identical events being logged with the exact same timestamp, which shouldn't normally occur. I'm utilizing the Splunk_TA_aws app for ingesting logs, specifying each S3 bucket and the corresponding ELB log prefix as inputs. My search pattern is index=customer-index-{customer_name} sourcetype="aws:elb:accesslogs", aimed at isolating the data per customer. Upon reviewing the original logs directly within the S3 buckets, I confirmed that the duplicates are not present at the source; they only appear once ingested into Splunk. This leads me to wonder if there might be a configuration or processing step within Splunk or the AWS Add-on that could be causing these duplicates. Has anyone experienced a similar issue, or could anyone offer insights into potential causes or solutions? Any advice or troubleshooting tips would be greatly appreciated. Here we can see the same timestamp for the logs: if I add | dedup _raw, the number of events goes down from 12,710 to 6,535. Thank you in advance for your assistance.
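To quantify how widespread the duplication is per customer, I have been counting identical raw events, something like this sketch (using the same per-customer index pattern):

index=customer-index-* sourcetype="aws:elb:accesslogs"
| stats count BY index _raw
| where count > 1
| stats count AS duplicated_events sum(count) AS total_copies BY index

This shows, per index, how many distinct events exist in more than one copy and how many copies there are in total.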
Hi all, I was wondering if there's a way to create a search that I can add to a dashboard that will present the peak day and its volume over a 30-day period. Essentially, when loading the dashboard I was hoping it could keep whichever day the peak occurred on and not replace it until a larger peak occurs, assuming that's even possible. I've possibly worded this poorly, so feel free to ask any questions about what I'm trying to achieve.
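Roughly what I am after, as an untested sketch (the index name is a placeholder), is the single busiest day in the last 30 days and its event volume:

| tstats count WHERE index=my_index earliest=-30d@d latest=@d BY _time span=1d
| sort - count
| head 1
| eval peak_day=strftime(_time, "%Y-%m-%d")
| table peak_day count

Whether the result can be "pinned" until a bigger peak comes along is the part I am unsure about.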
My apologies, I was using "eventTimestamp" instead of "@timestamp" in my rex command; I just realized that and it's working now. However, I do not need the date in the last column, only the time. Please help with how to do that. Please find the details below.

Query:

index=* namespace="dk1017-j" sourcetype="kube:container:kafka-clickhouse-snapshot-writer" message="*Snapshot event published*" AND message="*dbI-LDN*" AND message="*2024-04-03*" AND message="*"
| fields message
| rex field=_raw "\s+date=(?<BusDate>\d{4}-\d{2}-\d{2})"
| rex field=_raw "sourceSystem=(?<Source>[^,]*)"
| rex field=_raw "entityType=(?<Entity>\w+)"
| rex field=_raw "\"@timestamp\":\"(?<Time>\d{4}-\d{2}-\d{2}[T]\d{2}:\d{2})"  --> Please help here
| sort Time desc
| dedup Entity
| table Source, BusDate, Entity, Time

Raw data:

{"@timestamp":"2024-04-04T02:25:59.366Z","level":"INFO","message":"Snapshot event published: SnapshotEvent(version=SnapshotVersion(sourceSystem=dbI-LDN, entityType=ACCOUNT, subType=, date=2024-04-03, version=1, snapshotSize=326718, uuid=8739e273-cedc-482b-b696-48357efc8704, eventTimestamp=2024-04-04T02:24:52.762129638), status=CREATED)","thread":"snapshot-checker-3","loggerName":"com.db.sdda.dc.kafka.snapshot.writer.InternalEventSender"}

I need only the time, 02:25:59 AM/PM, in the last column.
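A way that seems to work is to parse the extracted value into epoch time and then format only the time portion, e.g. appended at the end of the query above (a sketch; the rex would first need to be extended to \d{2}:\d{2}:\d{2} so the seconds are captured):

| eval Time=strftime(strptime(Time, "%Y-%m-%dT%H:%M:%S"), "%I:%M:%S %p")

With the sample event that would render 02:25:59 AM.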
Good morning, I have some alerts set up that are not triggering. They are Defender events. If I run the query in a normal search, I do get the results that the alert should have caught. However, for some reason the alerts are not triggered: neither is the email sent, nor do they appear in the Triggered Alerts section. This is my alert, and this is one of the events for which it should have triggered but did not. I also tried disabling the throttle in case it was suppressing the results. I also checked whether the search had been skipped, but it was not. Any idea?
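One thing I have been checking in parallel is what the scheduler logged for the alert, along these lines (a sketch; "My Defender Alert" is a placeholder for the real saved search name):

index=_internal sourcetype=scheduler savedsearch_name="My Defender Alert"
| sort - _time
| table _time status result_count alert_actions run_time

If status shows success with a result_count greater than zero but alert_actions stays empty, the problem would seem to be on the trigger/action side rather than the search itself.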
Good morning, I did the Splunk Enterprise update from version 9.1.1 to 9.2.1, but the web interface no longer shows the connected machines. What can I verify? Thanks. OS: CentOS 7
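So far the only check I have thought of is looking at the internal connection metrics to see whether the forwarders are in fact still connecting, independent of what the UI displays (a rough sketch):

index=_internal source=*metrics.log* group=tcpin_connections
| stats latest(_time) AS last_connection BY hostname sourceIp
| eval last_connection=strftime(last_connection, "%Y-%m-%d %H:%M:%S")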
I am planning to give a basic Splunk session to my team. Can you help me find a cheat sheet available online that I can download easily?
Query used:

index=* namespace="dk1017-j" sourcetype="kube:container:kafka-clickhouse-snapshot-writer" message="*Snapshot event published*" AND message="*dbI-LDN*" AND message="*2024-04-03*" AND message="*"
| fields message
| rex field=_raw "\s+date=(?<BusDate>\d{4}-\d{2}-\d{2})"
| rex field=_raw "sourceSystem=(?<Source>[^,]*)"
| rex field=_raw "entityType=(?<Entity>\w+)"
| rex field=_raw "\"timestamp\":\"(?<Time>\d{4}-\d{2}-\d{2}[T]\d{2}:\d{2})"  -- this is not working
| sort Time desc
| dedup Entity
| table Source, BusDate, Entity, Time

This is how the raw data looks. I would like to extract only the time; please also suggest how I can convert it to AM/PM. Kindly provide a solution.
There are a few questions regarding Interactions in Dashboard Studio. This is the current code I have:

{
    "drilldown": "true",
    "visualizations": {
        "viz_ar3dmyq9": {
            "type": "splunk.line",
            "dataSources": {
                "primary": "ds_jjp4wUrz"
            },
            "eventHandlers": [
                {
                    "type": "drilldown.setToken",
                    "options": {
                        "tokens": [
                            {
                                "token": "click_time",
                                "key": "row._time.value"
                            }
                        ]
                    }
                },
                {
                    "type": "drilldown.linkToSearch",
                    "options": {
                        "query": "SomeSearchQuery",
                        "earliest": "$click_time$",
                        "latest": "$global_time.latest$",
                        "type": "custom",
                        "newTab": true
                    }
                }
            ],
            "options": {
                "seriesColorsByField": {},
                "dataValuesDisplay": "minmax",
                "y2AxisTitleText": ""
            }
        },
        "viz_leeY0Yzv": {
            "type": "splunk.markdown",
            "options": {
                "markdown": "**Value of Clicked row : $click_time$**",
                "backgroundColor": "#ffffff",
                "fontFamily": "Times New Roman",
                "fontSize": "extraLarge"
            }
        }
    },
    "dataSources": {
        "ds_uHpCvdwq": {
            "type": "ds.search",
            "options": {
                "enableSmartSources": true,
                "query": "someCustomQuery",
                "refresh": "10s",
                "refreshType": "delay"
            },
            "name": "CustomQuery"
        },
        "ds_jjp4wUrz": {
            "type": "ds.search",
            "options": {
                "query": "AnotherCustomQuery",
                "refresh": "10s",
                "refreshType": "delay"
            },
            "name": "CustomQuery2"
        }
    },
    "defaults": {
        "dataSources": {
            "ds.search": {
                "options": {
                    "queryParameters": {
                        "latest": "$global_time.latest$",
                        "earliest": "$global_time.earliest$"
                    }
                }
            }
        }
    },
    "inputs": {
        "input_global_trp": {
            "type": "input.timerange",
            "options": {
                "token": "global_time",
                "defaultValue": "-60m@m,now"
            },
            "title": "Global Time Range"
        }
    },
    "layout": {
        "type": "absolute",
        "options": {
            "width": 1440,
            "height": 960,
            "display": "auto"
        },
        "structure": [
            {
                "item": "viz_ar3dmyq9",
                "type": "block",
                "position": {
                    "x": 0,
                    "y": 0,
                    "w": 1440,
                    "h": 330
                }
            },
            {
                "item": "viz_leeY0Yzv",
                "type": "block",
                "position": {
                    "x": 310,
                    "y": 780,
                    "w": 940,
                    "h": 30
                }
            }
        ],
        "globalInputs": [
            "input_global_trp"
        ]
    },
    "description": "Tracking objects",
    "title": "Info"
}

The line chart maps the data from 'AnotherCustomQuery' (ds_jjp4wUrz), which is a query returning a chart that gets the max object_time for the 3 objects at one-minute intervals:

.... | chart span=1m max(object_time) OVER _time by object

producing a 3-line chart, each line representing one object. I can get the time the user clicked on, tokenized in "click_time", but a few puzzles remain:
1. When the user clicks on an object's line, how do I get that line's object name and pass it along to my search query?
2. For the "drilldown.linkToSearch", how can I add 1 minute to the click_time and use that as the latest? Is it possible I don't need that range and can instead search at a specific time?
3. When I open the dashboard, the first click's search says "invalid earliest time", but subsequent clicks on the chart work fine. Is there a particular reason why this is happening?
Hello, we received an alert from the Upgrade Readiness App about this app not being compatible with Python 3. This app appears to be an internal Splunk app. Does anyone know anything about it? Splunk Cloud version: 9.1.2312.103. Thank you!
Hi All, I have an issue where the Data Model Wrangler app can no longer see most tags where all other apps can. The Data Model Wrangler app only sees 30 tags (None of the CIM tags) where other apps see 830 tags. The Data Model Wrangler app used to be able to see all tags and all panels were displaying correctly. Now we get no results for the Field Quality Data score due to tag visibility. The permissions on the tags are everyone read and globally shared. Has anyone experienced anything similar where one app cannot see tags that all other apps can see? Thanks
Sample log:

{"date" : "2021-01-01 00:00:00.123 | dharam=fttc-pb-12312-esse-4 | appLevel=INRO | appName=REME_CASHE_ATTEMPT_PPI | env=sit | hostName=apphost000adc | pointer=ICFD | applidName=http.ab.web.com|news= | list=OUT_GOING | team=norpass | Category=success | status=NEW | timeframe=20", "tags": {"host": "apphost000adc" , "example": "6788376378jhjgjhdh2h3jhj2", "region": null, "resource": "add-njdf-tydfth-asd-1"}}

I used the regex below to extract all fields, but one field, timeframe, is not getting extracted:

|regex  _raw= (\w+)\=(.+?) \|

How do I modify my regex so that the timeframe field is extracted as well?
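I think the issue is that timeframe is the last key before the closing quote, so there is no trailing " |" for the pattern to match against. A sketch of a fix for just that field, where the character class stops at a pipe, quote or comma instead of requiring a trailing pipe:

| rex field=_raw "timeframe=(?<timeframe>[^|\",]+)"

A more general variant would be to let each value end on either a pipe or a quote, e.g. a pattern like (\w+)=([^|\"]*), applied with whatever extraction command fits.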
Hi all, I'm getting to grips with SPL and would be forever grateful if someone could lend their brain for the below.

I've got a lookup in the format below:
(Fields) --> host, os, os version
(Values) --> Server01, Windows, Windows Server 2019

In my case this lookup has 3000 host values, and I want to know their source values in Splunk. (This lookup was generated by a match condition with another, so I KNOW that these hosts are present in my Splunk environment.)

I basically need a way to do the following:

| tstats values(source) where index=* host=(WHATEVER IS IN MY LOOKUP HOST FIELD) by index, host

But I can't seem to find a way. I did originally try:

| tstats values(source) where index=* by host, index | join type=inner host [| inputlookup mylookup.csv | fields host | dedup host]

But the results were too large for Splunk to handle. Please help!
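One approach that avoids the join entirely (a sketch, using the same mylookup.csv and its host and os fields): run the tstats first and then keep only the hosts that exist in the lookup:

| tstats values(source) AS sources WHERE index=* BY index host
| lookup mylookup.csv host OUTPUT os
| where isnotnull(os)
| fields index host sources

The lookup match simply acts as the filter, so only the 3000 hosts from the CSV survive to the final table.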
My environment consists of 1 search head, 1 manager, and 3 indexers. I added another search head so that I can put Enterprise Security on it, but when I run any search I get this error. (The only reason I did index=* was to show that ALL indexes are like this; no matter what I search, this happens.) What I'm most confused about is why the bottom portion (where the search results are) is greyed out and I can't interact with it. Here are the last few lines from the search.log; if more is required I can send more of the log, it is just really long.

04-03-2024 18:00:38.937 INFO SearchStatusEnforcer [11858 StatusEnforcerThread] - sid=1712181568.6, newState=BAD_INPUT_CANCEL, message=Search auto-canceled
04-03-2024 18:00:38.937 ERROR SearchStatusEnforcer [11858 StatusEnforcerThread] - SearchMessage orig_component=SearchStatusEnforcer sid=1712181568.6 message_key= message=Search auto-canceled
04-03-2024 18:00:38.937 INFO SearchStatusEnforcer [11858 StatusEnforcerThread] - State changed to BAD_INPUT_CANCEL: Search auto-canceled
04-03-2024 18:00:38.945 INFO TimelineCreator [11862 phase_1] - Commit timeline at cursor=1712168952.000000
04-03-2024 18:00:38.945 WARN DispatchExecutor [11862 phase_1] - Execution status=CANCELLED: Search has been cancelled
04-03-2024 18:00:38.945 INFO ReducePhaseExecutor [11862 phase_1] - Ending phase_1
04-03-2024 18:00:38.945 INFO UserManager [11862 phase_1] - Unwound user context: b.morin -> NULL
04-03-2024 18:00:38.948 INFO UserManager [11858 StatusEnforcerThread] - Unwound user context: b.morin -> NULL
04-03-2024 18:00:38.950 INFO DispatchManager [11855 searchOrchestrator] - DispatchManager::dispatchHasFinished(id='1712181568.6', username='b.morin')
04-03-2024 18:00:38.950 INFO UserManager [11855 searchOrchestrator] - Unwound user context: b.morin -> NULL
04-03-2024 18:00:38.950 ERROR ScopedAliveProcessToken [11855 searchOrchestrator] - Failed to remove alive token file='/opt/splunk/var/run/splunk/dispatch/1712181568.6/alive.token'. No such file or directory
04-03-2024 18:00:38.950 INFO SearchOrchestrator [11852 RunDispatch] - SearchOrchestrator is destructed. sid=1712181568.6, eval_only=0
04-03-2024 18:00:38.952 INFO UserManager [11861 SearchResultExecutorThread] - Unwound user context: b.morin -> NULL
04-03-2024 18:00:38.961 INFO SearchStatusEnforcer [11852 RunDispatch] - SearchStatusEnforcer is already terminated
04-03-2024 18:00:38.961 INFO UserManager [11852 RunDispatch] - Unwound user context: b.morin -> NULL
04-03-2024 18:00:38.961 INFO LookupDataProvider [11852 RunDispatch] - Clearing out lookup shared provider map
04-03-2024 18:00:38.962 INFO dispatchRunner [10908 MainThread] - RunDispatch is done: sid=1712181568.6, exit=0
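The only extra check I have thought of so far is confirming from the new search head that the indexers are actually configured and reachable as search peers, e.g. with this rough sketch against the REST endpoint:

| rest /services/search/distributed/peers splunk_server=local
| table title status version is_https

If the peers show up as Down or are missing entirely, that would at least explain why every distributed search behaves this way.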