All Topics

I am getting different results for the same query when checking the Statistics and Visualization tabs. Attaching both screenshots.
I made a graph that sends time data via a token at the click point. I use fieldformat to change how the time data is shown. This is the time-related part of the code for this graph:

| rename _time AS Date
| fieldformat Date = strftime(Date,"%Y-%m-%d")

So the token data is sent like this: "2024-01-23". I want to set the time on another graph using the value received from the token. For example, if time_token sends me "2024-01-23", I want to show only the data from 2024-01-23 in the other graph. I tried this code, but it did not work (maybe because of the format change):

| where _time = time_token

How can I solve this problem?
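A minimal sketch of one way the comparison could be written, assuming the token is named time_token and carries a "%Y-%m-%d" string; the epoch value in _time has to be formatted (or the token parsed back to an epoch) before the two can match:

| eval Date_str = strftime(_time, "%Y-%m-%d")
| where Date_str == "$time_token$"

Equivalently, the token could be converted to an epoch range with strptime and used as earliest/latest bounds, which leaves _time untouched.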
Let's say I have a query which gives no results at the present date but may give some in the future. In this query I have calculated timeval = strftime(_time,"%y-%m-%d"). Since no data is coming in, _time is empty, so timeval gives no result. But I still have to show timeval based on the present time; how can I do that? I also used this at the end of the query:

| appendpipe [stats count | where count==0 | eval timeval=strftime(now(),"%d/%m/%Y") | where count==0]

but still no result.
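A minimal sketch of the fallback pattern being attempted, assuming the goal is a single placeholder row carrying today's date whenever the base search returns nothing (everything other than timeval is a placeholder):

<base search>
| eval timeval = strftime(_time, "%y-%m-%d")
| stats count by timeval
| appendpipe
    [ stats count
      | eval timeval = strftime(now(), "%d/%m/%Y")
      | where count == 0 ]

The where count==0 clause is what keeps the appended row only when the preceding pipeline produced no results.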
Installed the universal forwarder credentials package and the UF agent on a Windows machine. Still not receiving data. A restart of the Splunk forwarder has been done. Both packages were installed with the same user, i.e. root. Unable to receive any type of data from the Windows OS. Need assistance.
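For reference, a minimal sketch of the forwarder-side checks that usually apply here, assuming the indexer receives on port 9997 (the host name is a placeholder to match your environment):

"C:\Program Files\SplunkUniversalForwarder\bin\splunk" list forward-server
"C:\Program Files\SplunkUniversalForwarder\bin\splunk" add forward-server <indexer-host>:9997

# equivalent outputs.conf on the UF
[tcpout:default-autolb-group]
server = <indexer-host>:9997

If list forward-server shows the indexer under "Configured but inactive forwards", the connection itself (DNS, firewall, receiving port) is the next thing to check, along with splunkd.log on the forwarder.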
Hi Splunkers, today I have a "curiosity" about an architectural design I examined last week. The idea is the following: different regions (the 5 continents, in a nutshell), each one with its own set of log sources and Splunk components. All Splunk "items" are on prem: forwarders, indexers, SHs and so on. Moreover, every region has 2 SHs: one with Enterprise Security and another one without it. Until now, "nothing new under the sun", as we say in Italy. The new element, I mean new for me and my experience, is the following: there is a "centralized" cluster of SHs, each one with Enterprise Security installed, that should collect the notable events from every regional ES. So the flow between those components should be:

Europe ES Notables -> "Centralized" ES Cluster
America ES Notables -> "Centralized" ES Cluster
And so on.

So my question is: is there any documentation about forwarding notable events from one ES platform to another? I searched but I didn't find anything about that (probably I searched badly, I know).
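Not the ES-specific mechanism being asked about, but for illustration, a minimal sketch of generic index forwarding with outputs.conf, assuming the notable events sit in a local notable index on the regional side and the central cluster's indexers receive on 9997 (group name and hosts are placeholders):

# outputs.conf on the regional instance holding the notable index
[tcpout]
defaultGroup = central_es

[tcpout:central_es]
server = central-es-idx1:9997, central-es-idx2:9997
forwardedindex.filter.disable = false
forwardedindex.0.whitelist = notable

Whether this fits depends on where the notable index actually lives (often on the regional indexers rather than the ES search head), so treat it only as a sketch of the forwarding primitive, not as a documented ES-to-ES pattern.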
I have JSON files which I am trying to split into separate events, as the JSON contains multiple events within each log. Here is an example of what a log looks like:

{ "vulnerability": [ { "event": { "sub1": { "complexity": "LOW" }, "sub2": { "complexity": "LOW" } }, "id": "test", "description": "test", "state": "No Known", "risk_rating": "LOW", "sources": [ { "date": "test" } ], "additional_info": [ { "test": "test" } ], "was_edited": false }, { "event": { "sub1": { "complexity": "LOW" }, "sub2": { "complexity": "LOW" } }, "id": "test", "description": "test", "state": "No Known", "risk_rating": "LOW", "sources": [ { "date": "test" } ], "additional_info": [ { "test": "test" } ], "was_edited": false } ], "next": "test", "total_count": 109465 }

In this example there would be two separate events that I need extracted. I am essentially trying to pull out the two nested event objects. Each log should have this same exact JSON format, but there could be any number of events included.

First event:

{ "event": { "sub1": { "complexity": "LOW" }, "sub2": { "complexity": "LOW" } }, "id": "test", "description": "test", "state": "No Known", "risk_rating": "LOW", "sources": [ { "date": "test" } ], "additional_info": [ { "test": "test" } ], "was_edited": false }

Second event:

{ "event": { "sub1": { "complexity": "LOW" }, "sub2": { "complexity": "LOW" } }, "id": "test", "description": "test", "state": "No Known", "risk_rating": "LOW", "sources": [ { "date": "test" } ], "additional_info": [ { "test": "test" } ], "was_edited": false }

I also want to exclude the opening

{ "vulnerability": [

and the closing

], "next": "test", "total_count": 109465 }

portions of the log files.

Am I missing something in how to set this sourcetype up? I currently have the following, but it does not seem to be working:

LINE_BREAKER = \{(\r+|\n+|\t+|\s+)"event":
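A minimal sketch of props.conf settings for this kind of splitting, assuming a dedicated sourcetype; the break has to happen between one event object and the next (i.e. on the "}," separator), and the wrapper can be trimmed per event with SEDCMD. The regexes below are untested assumptions against the sample above, not a verified config:

[vuln_json]
SHOULD_LINEMERGE = false
TRUNCATE = 0
# break between the "}" closing one event object and the "{ "event"" opening the next;
# only the captured comma/whitespace separator is discarded
LINE_BREAKER = \}(\s*,\s*)\{\s*"event"
# strip the opening wrapper from the first event
SEDCMD-strip_open = s/^\s*\{\s*"vulnerability":\s*\[\s*//
# strip the trailing "next"/"total_count" block from the last event
SEDCMD-strip_close = s/\]\s*,\s*"next":.*$//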
Hi, I am setting up a search head/indexer setup. I have port 9997 listening on the indexer, and I configured the search head to send to the indexer (since the files are being sent to the search head). I can see the SYN packets being sent from the search head to the indexer, but that's about it. I am not sure what the indexer is doing with them; it's not sending any error back or anything.

Capture of tcpdump on the indexer.

Capture of tcpdump and logs from the search head.

Let me know what I need to do to fix this. Thank you in advance.
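For reference, a minimal sketch of the two sides of this forwarding setup, assuming the receiving port is 9997 (host name is a placeholder); SYN packets with no SYN-ACK back usually point at nothing listening on that port or a firewall in between, which netstat and splunkd.log on the indexer can confirm:

# on the indexer: enable the receiving port
/opt/splunk/bin/splunk enable listen 9997

# outputs.conf on the search head (acting as forwarder)
[tcpout:primary_indexers]
server = <indexer-host>:9997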
When going to CMC -> Forwarders -> Forwarders: deployment, I see that we have 19k+ forwarders, which is completely inaccurate. We have more like 900. It shows 18k+ as missing, and the list has instances decommissioned years ago. I thought I could fix this by telling it to rebuild the forwarder assets via the button under CMC -> Forwarders -> Forwarder monitor setup, but when I click on this, it processes for about a minute and then nothing changes. The description makes me think it is supposed to clear out the sim_forwarder_assets.csv lookup and rebuild it using only data it sees within the time frame I selected (24 hours). If I open up the lookup, all the entries it had previously are still there. Am I misunderstanding how this works, or is something broken?
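For reference, a minimal sketch of two searches that can help confirm the mismatch, assuming the lookup name from the setup page above (sim_forwarder_assets.csv) and that forwarder connections show up in _internal metrics; field names inside the lookup may differ:

| inputlookup sim_forwarder_assets.csv
| stats count AS entries_in_lookup

index=_internal source=*metrics.log* group=tcpin_connections earliest=-24h
| stats dc(hostname) AS forwarders_seen_last_24h

If the first number stays at 19k+ after the rebuild while the second is around 900, the rebuild is clearly not rewriting the lookup.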
I am sure it's fine, but this TA seems a little off (the logo and the 'Built By'). Given who Wiz are, what they do, and their recent high-profile work disrupting some bad guys, I am keen for others' views on this. "Built by product-integrations product-integrations" ... strange ... and the logo seems pixelated. Our team has recently had some "luck" in getting things vetted that really shouldn't have been (and yes, we reported it), so simply saying "it's passed App Vetting" isn't enough for us.
My original time format in the search is:

eventID: d7d2d438-cc61-4e74-9e9a-3fd8ae96388d
eventName: StartInstances
eventSource: ec2.amazonaws.com
eventTime: 2024-01-30T05:00:27Z
eventType: AwsApiCall

I am not able to convert it using the strptime function:

eval dt_year_epoc = strptime(eventTime, "%Y-%m-%dThh:mm:ssZ")
eval dt_day = strftime(dt_year_epoc, "%d")

Nothing comes up in dt_day.
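A minimal sketch of the conversion using strptime format specifiers rather than the literal hh:mm:ss, assuming eventTime always looks like 2024-01-30T05:00:27Z:

| eval dt_year_epoc = strptime(eventTime, "%Y-%m-%dT%H:%M:%SZ")
| eval dt_day = strftime(dt_year_epoc, "%d")

The original "%Y-%m-%dThh:mm:ssZ" treats hh, mm and ss as literal characters, so strptime never matches and returns null, which is why dt_day comes up empty.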
How can I get eventName, instanceId and eventTime into a pivot table from the search below?

index=aws_cloudtrail sourcetype="aws:cloudtrail" (eventName="StartInstances" OR eventName="StopInstances" OR eventName="StartDBInstance" OR eventName="StopDBInstance" OR eventName="StartDBCluster" OR eventName="StopDBCluster") AND (userIdentity.type="AssumedRole" AND userIdentity.sessionContext.sessionIssuer.userName="*sched*")
| spath "requestParameters.instancesSet.items{}.instanceId"
| search "requestParameters.instancesSet.items{}.instanceId"="i-0486ba14134c4355b"
| spath "responseElements.instancesSet.items{}.instanceId"
| spath "recipientAccountId"

Events:

awsRegion: us-east-1
eventCategory: Management
eventID: 3a80a688-fa82-4950-b823-69ffc3283862
eventName: StartInstances
eventSource: ec2.amazonaws.com
eventTime: 2024-01-30T11:00:38Z
eventType: AwsApiCall
eventVersion: 1.09
managementEvent: true
readOnly: false
recipientAccountId: XXXXXXXXXXX
requestID: b404437a-ee56-4531-842e-1b10c01f01d3
requestParameters: {
  instancesSet: {
    items: [
      {
        instanceId: i-0486ba14134c4355b
      }
    ]
  }
}
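A minimal sketch of a stats-style table over those three fields, assuming the instance ID of interest comes from requestParameters (the rename is only for readability):

... (base search and spath commands as above)
| rename "requestParameters.instancesSet.items{}.instanceId" AS instanceId
| stats values(eventTime) AS eventTime BY eventName, instanceId

A plain | table eventTime, eventName, instanceId works as well if one row per event is preferred; a true Pivot would instead require adding these fields to a data model.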
Under Splunk DB Connect, we have data inputs created which periodically pull data from our SQL Server database and put it into our indexes. The SQL queries use a rising column which acts as a checkpoint column for these inputs. Things were working fine until the KV Store went down on the Splunk server. To bring it back up, we followed these steps, which worked:

1. Stop the search head that has the stale KV Store member.
2. Run the command splunk clean kvstore --local.
3. Restart the search head.
4. Run the command splunk show kvstore-status to verify synchronization.

But after doing this, Splunk DB Connect stopped writing any new data to the indexes. The DB Connect logs are filled with errors that look like this for all data inputs:

loading checkpoint for input title=CDA_Eportal_JobUserMapping
loading checkpoint by checkpointKey=63863acecd230000be007648 from KV Store for input title=CDA_Eportal_JobUserMapping
error loading checkpoint for input title=CDA_Eportal_JobUserMapping
java.lang.RuntimeException: Can't find checkpoint for DB Input CDA_Eportal_JobUserMapping

I am unable to even edit the data inputs to manually add checkpoints, as they fail while saving. Is there any way to fix all the checkpoints, or clear all of them, so that data gets written to the indexes again? What should I do to fix this issue?
I need to backfill some missing data into a summary index. However, some data is already present in the same index, so I only want to backfill the remaining events; the data that is already present should not be injected again. I am currently using the fill_summary_index.py script, but during testing it seems to inject duplicate data, indicating that the deduplication function is not working correctly in this script. Please help me with a proper way to address this issue.
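For reference, a minimal sketch of the script invocation with its dedup option enabled, assuming the app, saved search name, time range and credentials are placeholders to be replaced; dedup skips time spans for which the summary already has data from that search rather than de-duplicating individual events:

/opt/splunk/bin/splunk cmd python fill_summary_index.py \
    -app search -name "my_summary_saved_search" \
    -et -7d@d -lt @d -j 4 -dedup true \
    -auth admin:changeme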
Hey folks, does anyone know of a straightforward way to get a count of the number of times each playbook is used as a subplaybook? I know you're able to click into the playbook and look to see where it's being used, but I was hoping to do so at a large scale without having to click into every single playbook. I've got some changes coming that will require a fair number of playbooks to be updated and was hoping to use the count to help determine where to prioritize our effort.
Ever since upgrading to version 2.1.4 of the Cofense Triage Add-On, we get hundreds of these errors in our _internal logs:

01-30-2024 13:20:42.094 -0500 ERROR ScriptRunner [1503161 TcpChannelThread] - stderr from '/opt/splunk/bin/python3.7 /opt/splunk/bin/runScript.py execute': /opt/splunk/etc/apps/TA-cofense-triage-add-on-for-splunk/bin/ta_cofense_triage_add_on_for_splunk/aob_py3/solnlib/utils.py:153: UserWarning: _get_all_passwords is deprecated, please use get_all_passwords_in_realm instead.

It does not affect log ingestion, but we would like help figuring out how to suppress the errors.
How can I get the list of indexes/sourcetypes that haven't been used in Splunk for the last 90 days? Can anyone suggest a query to find the indexes/sourcetypes that are not used in any knowledge object?
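A minimal sketch of one interpretation ("no events received in the last 90 days"), assuming tstats visibility over all indexes; usage inside knowledge objects or searches would need a separate check against the _audit index or the REST endpoints:

| tstats latest(_time) AS last_event WHERE index=* BY index, sourcetype
| eval days_since_last_event = round((now() - last_event) / 86400, 1)
| where last_event < relative_time(now(), "-90d")
| convert ctime(last_event)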
I am working on creating an alert in Splunk. In my search I create a field using eval that is not used in the result table, but I would like to use it in the email subject and body.

index=applications sourcetype=aws:cloudwatchlogs ((Job="*prod-job1*") OR (Job="*prod-job2*"))
| eval emailTime=strftime(now(),"%m/%d/%Y")
| stats latest(_time) as _time latest(s3Partition) as s3Partition latest(field1) as field1 latest(field2) as field2 latest(emailTime) as emailTime by table_name
| search field2="*" emailTime=*
| eval diff=(field2-field1)
| eval evt_time=strftime(_time, "%Y-%m-%d")
| eval partition_date=substr(s3Partition, len("event_creation_time=")+1, len("yyyy-mm-dd"))
| where isnotnull(table_name) and isnotnull(emailTime) and (evt_time == partition_date)
| table table_name, field1, field2, diff
| sort raw_table_name
| rename table_name AS "Table Name" field1 AS "Field1 count" field2 AS "Field2 count" diff as "Count Difference"

I tried using $result.partition_date$ and $result.emailTime$ in the subject and body, but the values are not substituted; they appear empty in both places. Is it possible to use these values in the email without including them in the table for the alert? Thank you.
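A minimal sketch of the usual workaround, assuming the email tokens should come from the first result row: $result.<field>$ only resolves for fields that are present in the final search output, so the fields have to survive the table command (they can still be left out of the dashboard panel, but not out of the result set):

...
| table table_name, field1, field2, diff, emailTime, partition_date
| rename table_name AS "Table Name" field1 AS "Field1 count" field2 AS "Field2 count" diff AS "Count Difference"

Subject example: Count difference report for $result.partition_date$ (sent $result.emailTime$)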
Our current SOAR servers, a fresh install on AWS EC2s, return 500s each night. Upon investigation, this error shows up in the logs:

File "/opt/soar/usr/python39/lib/python3.9/site-packages/psycopg2/__init__.py", line 127, in connect
conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
django.db.utils.OperationalError: connection to server on socket "/tmp//.s.PGSQL.6432" failed: No such file or directory
Is the server running locally and accepting connections on that socket?

On a healthy server, that socket file is present. On a 500-error server, it's missing. Is there an explanation of why it might be going missing? The issue is temporarily resolved by stopping and starting Phantom again. I think it might be related to PostgreSQL or pgbouncer.
I have a Splunk distributed deployment with 3 indexers, 3 search heads, a manager, and 2 heavy forwarders. I am attempting to deploy the DB Connect application to the HF and the SHC. The SHC has 3 member nodes, with the deployer on the manager node. Ideally this would all be done with Ansible; sadly, the deployer gets in the way. I can deploy to the HF with Ansible, but the deployer keeps removing the DB Connect app on the SHC.

That said, to deploy I install the app on the manager node, install the drivers, copy it to the .../shcluster/apps directory, and run the splunk apply shcluster-bundle command. I've done this both manually and using Ansible. When I run the apply, the deployer does not put the entire app on the search heads; it only puts the default and metadata directories into the splunk_app_db_connect directory on the search heads. When I go into Manage Apps in the GUI, I see the app installed, but it is not visible.

I would prefer not to use the GUI for management and to perform all management tasks via the CLI and Ansible. The code is stored in a version control system, which gives not only control over the deployments but also tracks who did what, when, why, and how.

So I guess there are multiple questions:
Why is the deployer not pushing the entire application to the search heads?
How can I disable the deployer and just use Ansible?
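For reference, a minimal sketch of the deployer push sequence as typically run from the CLI (paths, target and credentials are placeholders); note that the deployer merges each app's local/ directory into default/ before pushing, so a local/ directory never appears on the members, which can make a push look more incomplete than it is:

# on the deployer (manager node): stage the full app, then push
cp -R $SPLUNK_HOME/etc/apps/splunk_app_db_connect $SPLUNK_HOME/etc/shcluster/apps/
/opt/splunk/bin/splunk apply shcluster-bundle \
    -target https://<shc-member>:8089 \
    -auth admin:changeme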
While creating a HEC token through PuTTY using the command below, we get an error like "Couldn't request server info: Couldn't complete HTTP request: Connection refused". Please provide any info or a solution.

/opt/splunk/bin/splunk http-event-collector create Appd_Splunk -uri http://<IP>:8089 -description "This is a Appd token" -disabled 1 -index toll_alrt
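For reference, a minimal sketch of the same command pointed at the management port over HTTPS (the usual default for 8089), with everything else unchanged; "connection refused" generally means nothing is listening at that address/port, so this only helps once splunkd is confirmed running and reachable on 8089 on that host:

/opt/splunk/bin/splunk http-event-collector create Appd_Splunk \
    -uri https://<IP>:8089 \
    -description "This is a Appd token" \
    -disabled 1 -index toll_alrt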