I have a universal forwarder running on my Domain Controller which only captures logon/logoff events.

inputs.conf:

  [WinEventLog://Security]
  disabled = 0
  current_only
  renderXml = 1
  whitelist = 4624, 4634

On my Splunk server I set up forwarding to a third party.

outputs.conf:

  [tcpout]
  defaultGroup = nothing

  [tcpout:foobar]
  server = 10.2.84.209:9997
  sendCookedData = false

  [tcpout-server://10.2.84.209:9997]

props.conf:

  [XmlWinEventLog:Security]
  TRANSFORMS-Xml = foo

transforms.conf:

  [foo]
  REGEX = .
  DEST_KEY = _TCP_ROUTING
  FORMAT = foobar

Before creating/editing these conf files I was seeing lots of non-Windows events being sent to the destination. With these confs in place I am not seeing any events being forwarded at all. What is the easiest fix to my conf files so that I only send the XML Windows events to the third-party system? Thanks, Billy

EDIT: What markup does this forum use? Single/triple backticks don't work, nor does <pre></pre>.
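For reference, a minimal sketch of the routing pieces, assuming the events arrive with sourcetype XmlWinEventLog (newer versions of the Splunk Add-on for Microsoft Windows use that sourcetype and set source to XmlWinEventLog:Security, while older versions used XmlWinEventLog:Security as the sourcetype itself). The TRANSFORMS stanza only takes effect on the first tier that parses the data (indexer or heavy forwarder), not on the universal forwarder.

props.conf (on the parsing tier):

  # if the sourcetype is XmlWinEventLog
  [XmlWinEventLog]
  TRANSFORMS-route_xml = send_xml_to_foobar

  # or, if the data still carries the older sourcetype
  # [XmlWinEventLog:Security]
  # TRANSFORMS-route_xml = send_xml_to_foobar

transforms.conf:

  [send_xml_to_foobar]
  REGEX = .
  DEST_KEY = _TCP_ROUTING
  FORMAT = foobar

A quick way to confirm which stanza name applies is to check what the events actually land with, e.g. index=* source="XmlWinEventLog:Security" | stats count by sourcetype.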
Hello, I would like to know if there is any way to integrate GitHub Cloud with Splunk Cloud, and how those logs can then be forwarded from Splunk to the Rapid7 SIEM?
I have .gz syslog files but I am unable to fetch all of them. For each host (abc) the files are named syslog.log.1.gz through syslog.log.24.gz. Only the one ending in 24 is ingested; for the other 23, the internal logs say "was already indexed as a non-archive, skipping".

Log path: /ad/logs/abc/syslog/syslog.log.24.gz

Internal logs:

  03-12-2024 14:59:59.590 -0700 INFO ArchiveProcessor [1944346 archivereader] - Archive with path="/ad/logs/abc/syslog/syslog.log.2.gz" was already indexed as a non-archive, skipping.
  03-12-2024 14:59:59.590 -0700 INFO ArchiveProcessor [1944346 archivereader] - Finished processing file '/ad/logs/abc/syslog/syslog.log.2.gz', removing from stats

Should I try crcSalt or initCrcLength?
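A sketch of one possible direction, assuming a monitor input over that directory (the stanza path below is illustrative): the "already indexed as a non-archive" message generally means the checksum computed from the file's content matched a file Splunk has already seen, so adding crcSalt = <SOURCE> makes the full path part of the checksum and forces each file to be treated as distinct.

  [monitor:///ad/logs/*/syslog/syslog.log.*.gz]
  sourcetype = syslog
  # include the full path in the CRC so similar-looking files are not skipped
  crcSalt = <SOURCE>

Note that crcSalt = <SOURCE> can cause re-ingestion of files that are renamed or rolled, so it is worth testing on one host first.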
I have a query whose results look like this:

  Application1-Start  Application1-Stop  Application2-Start  Application2-Stop  Application3-Start  Application3-Stop
  10                  4                  12                  7                  70                  30
  12                  8                  10                  4                  3                   2
  14                  4                  12                  5                  16                  12

But I want to see the output as shown below. Is that possible?

  Start         Start         Start         Stop          Stop          Stop
  Application1  Application2  Application3  Application1  Application2  Application3
  10            12            70            4             7             30
  12            10            3             8             4             2
  14            12            16            4             5             12
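A sketch of one way to get close to this, assuming the column names really are Application1-Start, Application1-Stop, and so on. Splunk tables only support a single header row, so the two-level header has to be approximated with combined column names; the rename/table below just regroups the columns so all the Start values come first:

  ... your existing search ...
  | rename "Application1-Start" as "Start Application1", "Application2-Start" as "Start Application2", "Application3-Start" as "Start Application3", "Application1-Stop" as "Stop Application1", "Application2-Stop" as "Stop Application2", "Application3-Stop" as "Stop Application3"
  | table "Start Application1", "Start Application2", "Start Application3", "Stop Application1", "Stop Application2", "Stop Application3"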
Hello, how can I ensure the data being sent to cool_index is rolled to cold when the data is 120 days old? The config I'll use:

  [cool_index]
  homePath = volume:hotwarm/cool_index/db
  coldPath = volume:cold/cool_index/colddb
  thawedPath = $SPLUNK_DB/cool_index/thaweddb
  # 120 day retention
  frozenTimePeriodInSecs = 10368000
  maxTotalDataSizeMB = 60000
  maxDataSize = auto
  repFactor = auto

Am I missing something?
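For context, a sketch of the settings involved (the values are illustrative, not recommendations): frozenTimePeriodInSecs controls when buckets are frozen (deleted or archived), not when they move from warm to cold. Warm buckets roll to cold based on the number of warm buckets or the size of hot/warm storage, so an age-based warm-to-cold roll can only be approximated by sizing those limits:

  [cool_index]
  # freeze (delete/archive) buckets once their newest event is older than 120 days
  frozenTimePeriodInSecs = 10368000
  # roll the oldest warm bucket to cold once this many warm buckets exist
  maxWarmDBCount = 300
  # or cap the hot/warm storage for this index so older buckets spill to cold
  homePath.maxDataSizeMB = 40000
  maxTotalDataSizeMB = 60000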
I have a single-instance Splunk environment where the license is 100 GB. There is another single instance using the same license, and we get around 6 GB of data every day across both combined. Instance A is very fast but instance B is very slow (both have the same resources). All searches and dashboards are really slow: for instance, if I run a simple stats search over 24 hours it takes 25 seconds, compared to 2 seconds on the other instance. I checked the job inspector, which showed dispatch.evaluate.search = 12.84 and dispatch.fetch.rcp.phase_0 = 7.78. I want to know where I should start checking on the host and what steps to take.
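One possible starting point (a sketch, assuming the _introspection index is enabled, which it is by default on Splunk Enterprise) is to compare host-level resource usage on the two instances over the same window:

  index=_introspection component=Hostwide
  | timechart avg(data.cpu_system_pct) as cpu_system avg(data.cpu_user_pct) as cpu_user avg(data.mem_used) as mem_used_mb avg(data.normalized_load_avg_1min) as load

Large differences in CPU, memory, or load average between A and B (with the same data volume) would point at the host rather than at Splunk configuration.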
This is an odd acceleration behavior that has us stumped... If any of you have worked with the Qualys Technology Add-on before: Qualys dumps its knowledge base into a CSV file, which we converted to a KV store with the following collections.conf, acceleration enabled. The knowledge base has approx. 137,000 rows of about 20 columns.

  [qualys_kb_kvstore]
  accelerated_fields.QID_accel = {"QID": 1}
  replicate = true

We ran the following query with lookup local=true and with local=false (the default); according to the Job Inspector there was no real difference between running the lookup on the search head vs. the indexers. Without the lookup command, the query takes 3 seconds to complete over 17 million events. With the lookup added, it takes an extra 165 seconds for some reason, even with the acceleration turned on.

  index=<removed> (sourcetype="qualys:hostDetection" OR sourcetype="qualys_vm_detection") "HOSTVULN"
  | fields _time HOST_ID QID
  | stats count by HOST_ID, QID
  | lookup qualys_kb_kvstore QID AS QID OUTPUTNEW PATCHABLE
  | where PATCHABLE="YES"
  | stats dc(HOST_ID) ```Number of patchable hosts!```

An idea I am going to try is to add PATCHABLE as another accelerated field and see if that changes anything; this will require me to wait until tomorrow.

  accelerated_fields.QID_accel = {"QID": 1, "PATCHABLE": 1}

Is there something we're missing to help avoid the lookup taking an extra 2-3 minutes?
I have a weird date/time value: 20240307105530.358753-360. I would like to make it more user friendly, e.g. 2024/03/07 10:55:30 (%Y/%m/%d %H:%M:%S), and drop the rest. I know you can use sed for this, however I am not familiar with sed syntax. For example:

  | rex mode=sed field=_raw "s//g"

Any sed gurus out there?
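A sketch of one possible sed expression, assuming the value lives in a field called mytime (swap in your actual field name); it keeps the first 14 digits and reformats them:

  | rex mode=sed field=mytime "s/^(\d{4})(\d{2})(\d{2})(\d{2})(\d{2})(\d{2})\..*$/\1\/\2\/\3 \4:\5:\6/"

An alternative that avoids sed entirely is to parse the first 14 characters as a timestamp and reformat it (the trailing -360 minute offset is ignored in both sketches):

  | eval mytime = strftime(strptime(substr(mytime, 1, 14), "%Y%m%d%H%M%S"), "%Y/%m/%d %H:%M:%S")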
I'm collecting all the other logs (i.e. WinEventLogs, splunkd logs), the inputs.conf is accurate, and the splunk user has full access to the file. What are some non-Splunk reasons that would prevent a file from being monitored?
If I had to write a document for myself on basic Splunk learning: to create a dashboard I can either use inputs like index, source, and source fields, or I can give it a data set. Is that right? Can I write it with side headings like this, or am I wrong?

Understanding input data: explore different methods of data input into Splunk, such as ingesting data from files, network ports, or APIs.

Understanding data domains: discover how to efficiently structure your data in Splunk using data models to drive analysis.
If I have the following table using columns DATE and USAGE, is there a way to create a 3rd column to display the difference in the USAGE column from day X to the previous day?

  DATE    USAGE
  Feb-28  17.68 gb
  Feb-29  18.53 gb
  Mar-01  19.66 gb
  Mar-02  21.09 gb
  Mar-03  22.04 gb
  Mar-04  23.21 gb
  Mar-05  24.20 gb
  Mar-06  24.94 gb
  Mar-07  25.64 gb
  Mar-08  26.80 gb
  Mar-09  27.79 gb
  Mar-10  29.07 gb
  Mar-11  30.09 gb
  Mar-12  31.01 gb
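A sketch of one way to do it, assuming USAGE can be turned into a number (GB) and the rows are sorted by date ascending:

  ... your existing search ...
  | eval usage_gb = tonumber(replace(USAGE, "[^0-9.]+", ""))
  | streamstats current=f last(usage_gb) as prev_usage
  | eval DIFF = round(usage_gb - prev_usage, 2)

The delta command is a shorter alternative for the last two lines: | delta usage_gb as DIFF.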
I'm running stats to find out which events I want to delete. Basically I'm finding the minimum change_set a particular source has. Now I want to delete the events that have that source and its minimum change_set.

Note: the events also have a lot of attributes apart from change_set and source.

  index=test
  | stats min(change_set) by source

(Now delete the events which have that source and change_set.)

How can I write the delete operation with this query, in the most optimal way? @gcusello @ITWhisperer @scelikok @PickleRick

Thanks
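A sketch of one possible approach (assuming your role has the can_delete capability, and that you first run the search without | delete to confirm it matches only the events you expect): feed the per-source minimums back in as a subsearch, so the outer search only matches events whose source and change_set equal that minimum, then append delete:

  index=test
      [ search index=test
        | stats min(change_set) as change_set by source
        | fields source change_set ]
  | delete

The subsearch expands into (source=... AND change_set=...) OR ... pairs. Keep in mind that delete only masks events from search results; it does not free disk space.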
I have finished the Cybersecurity Landscape (eLearning) course and passed, but it is still marked as "In Progress". Is it me, or is there something wrong?
Good afternoon everyone, I need your help with this. I have a stats sum with the wildcard *:

  | appendpipe [stats sum(*) as * by Number | eval UserName="Total By Number: "]

and I need to format the sum(*) as * results with commas. How can I do that? Thank you
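A sketch of one possible approach, assuming every wildcarded column is numeric: run foreach over all fields after the appendpipe and apply tostring(X, "commas") to the numeric ones (the isnum guard leaves non-numeric fields such as UserName untouched):

  | appendpipe [stats sum(*) as * by Number | eval UserName="Total By Number: "]
  | foreach * [ eval <<FIELD>> = if(isnum('<<FIELD>>'), tostring('<<FIELD>>', "commas"), '<<FIELD>>') ]

Note that tostring turns the values into strings, so do this as the last formatting step (or use fieldformat on specific columns if you need them to stay numeric).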
We are using the Android agent for AppDynamics and R8 to obfuscate the code. The corresponding mapping file was uploaded, AppDynamics recognizes it and "deobfuscates" the stacktrace, but in the end both versions are identical and include obfuscated method names. This happened with multiple app releases and crashes. Locally I am perfectly able to retrace the stacktrace provided by AppDynamics with the uploaded mapping file.  Does someone have an idea what may be the reason for this? AppDynamics Gradle Plugin version: 23.6.0 Android Gradle Plugin version: 8.1.4
index="abc" aws_appcode="123" logGroup="watch" region="us-east-1" (cwmessage.message = "*Notification(REQUESTED)*")
| stats latest(_time) as start_time by cwmessage.transId
| join cwmessage.transId
    [ search index="abc" aws_appcode="123" logGroup="watch" region="us-east-1" (cwmessage.message = "*Notification(COMPLETED)*")
      | stats latest(_time) as cdx_time by cwmessage.transId ]
    [ search index="abc" aws_appcode="123" logGroup="watch" region="us-east-1" (cwmessage.message = "*Notification(UPDATeD)*")
      | stats latest(_time) as upd_time by cwmessage.transId ]
| join cwmessage.transId
| eval cdx=cdx_time-start_time, upd=upd_time-cdx_time
| table cwmessage.transId, cdx, upd

In the above query I'm repeating the same index search multiple times; I want to use it as a base search and reference it in all the nested searches in the dashboard. Please help me. Thanks
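A sketch of how a shared base search with post-process searches looks in Simple XML (the queries are illustrative; post-processing works best when the base search ends with a transforming command such as stats, and each panel then filters that result):

  <form>
    <label>Notifications</label>
    <search id="base_notifications">
      <query>
        index="abc" aws_appcode="123" logGroup="watch" region="us-east-1" "Notification"
        | stats latest(_time) as last_time by cwmessage.transId, cwmessage.message
      </query>
      <earliest>-24h</earliest>
      <latest>now</latest>
    </search>
    <row>
      <panel>
        <table>
          <search base="base_notifications">
            <query>
              search cwmessage.message="*Notification(REQUESTED)*"
              | stats latest(last_time) as start_time by cwmessage.transId
            </query>
          </search>
        </table>
      </panel>
    </row>
  </form>

Each additional panel (COMPLETED, UPDATeD, the duration calculations) would reference base="base_notifications" the same way, so the raw events are only fetched once.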
Hi Everyone, I am trying to replicate a log modification that was possible with fluentd when using splunk-connect-for-kubernetes:

  splunk_kubernetes_logging:
    cleanAuthtoken:
      tag: 'tail.containers.**'
      type: 'record_modifier'
      body: |
        # replace key log
        <replace>
          key log
          expression /"traffic_http_auth".*?:.*?".+?"/
          # replace string
          replace "\"traffic_http_auth\": \"auth cleared\""
        </replace>

Now that support for the above chart has ended, we have switched to splunk-otel-collector and also switched to logsengine: otel, and I am having a hard time replicating this modification. From the documentation I read, this should be done via processors (in the agent); please correct me if I am wrong. I have tried two processors but neither works. What am I missing?

  logsengine: otel
  agent:
    enabled: true
    config:
      processors:
        attributes/log_body_regexp:
          actions:
            - key: traffic_http_auth
              action: update
              value: "obfuscated"
        transform:
          log_statements:
            - context: log
              statements:
                - set(traffic_http_auth, "REDACTED")

This is new to me; can anyone point me to where these log modifications can be applied?

Thanks,
Ppal
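A sketch of one possible direction, assuming the string to clear lives inside the log body and using the OpenTelemetry transform processor's replace_pattern function; the regex and the pipeline wiring below are illustrative and depend on your chart version:

  agent:
    config:
      processors:
        transform/clear_auth:
          log_statements:
            - context: log
              statements:
                # rewrite the matching JSON snippet inside the raw log body
                - 'replace_pattern(body, "\"traffic_http_auth\"\\s*:\\s*\"[^\"]*\"", "\"traffic_http_auth\": \"auth cleared\"")'
      service:
        pipelines:
          logs:
            processors:
              # the new processor must also be listed in the logs pipeline;
              # when overriding this list, keep the chart's default processors in it as well
              - transform/clear_auth

For what it's worth, the set(traffic_http_auth, ...) statement in your attempt does not reference a valid log path; for data inside the body you would need replace_pattern on body (as sketched), or set(attributes["traffic_http_auth"], ...) if the value is actually a log attribute.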
When I do a stats count on my field it returns double the real number:

  index=raw_fe5_autsust Aplicacao=HUB Endpoint="*/"
  | eval Agrupamento=if(Agrupamento!="", Agrupamento, "AGRUPAMENTO_HOLDING/CE")
  | eval Timestamp=strftime(_time, "%Y-%m-%d")
  | stats count by Agrupamento, Timestamp
  | sort -Timestamp

I already tried dedup, and when I count only by Timestamp it works fine.
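One thing worth checking (a guess, not a diagnosis): if Agrupamento is multivalued on some events, stats by Agrupamento counts those events once per value, which can double the totals even though counting by Timestamp alone looks right. A quick check:

  index=raw_fe5_autsust Aplicacao=HUB Endpoint="*/"
  | eval values_per_event = mvcount(Agrupamento)
  | stats count by values_per_event

and, if that turns out to be the case, one possible workaround that keeps only the first value (assuming that is acceptable for your data):

  index=raw_fe5_autsust Aplicacao=HUB Endpoint="*/"
  | eval Agrupamento = mvindex(Agrupamento, 0)
  | eval Agrupamento = if(Agrupamento!="", Agrupamento, "AGRUPAMENTO_HOLDING/CE")
  | eval Timestamp = strftime(_time, "%Y-%m-%d")
  | stats count by Agrupamento, Timestamp
  | sort - Timestamp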
Hey guys, has anyone ever come across this problem? I'm using Splunk Cloud. I'm trying to extract a new field using regex, but the data is in the source field.

  | rex field=source "Snowflake\/(?<folder>[^\/]+)"

This is the regex I'm using; when I use it in a search it works perfectly. But the main goal is to save this as a permanent field extraction. I know that field extractions draw from _raw by default; is there an option to direct Splunk Cloud to pull from the source field and save it as a permanent field?
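A sketch of one possible approach, assuming you can deploy a props.conf entry (via a custom app or through the Splunk Cloud UI under Settings > Fields > Field extractions); the sourcetype name below is a placeholder:

  [your_sourcetype]
  EXTRACT-folder = Snowflake\/(?<folder>[^\/]+) in source

The trailing "in source" directs the search-time extraction to run against the source field instead of _raw.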
Hi, I would like to know how I can create a new filter by a field like "slack channel name" / Phantom artifact id. How is it created? Thanks!