All Topics


I have a search that looks like this: index=dog sourcetype=cat earliest=-30d [| inputlookup LU1_siem_set_list where f_id=*$pick_f_id$* | stats values(mc) as search | eval search="mc=".mvjoin(search," OR mc=")] | stats latest(_time) by ip. What I see is:
mc        latest(_time)
00.00.01  1715477192
00.00.02  1715477192
00.00.03  1715477192
How can I present this in a dashboard with the time formatted? Thanks!
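A sketch of one way to do that (assuming a "%Y-%m-%d %H:%M:%S" display is what is wanted): rename the stats result so it is easy to reference, then convert the epoch value with strftime.

index=dog sourcetype=cat earliest=-30d [| inputlookup LU1_siem_set_list where f_id=*$pick_f_id$* | stats values(mc) as search | eval search="mc=".mvjoin(search," OR mc=")]
| stats latest(_time) as latest_time by ip
| eval latest_time=strftime(latest_time, "%Y-%m-%d %H:%M:%S")

Renaming to latest_time avoids having to quote the generated field name "latest(_time)" in later commands.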
Hi, I'm working on creating a dashboard but I'm not familiar with time formatting. Could someone help with how to format the time with strftime in this search so it displays on the dashboard? Index=a sourcetype=b earliest=-30d [|inputlookup LU0_siem_asset_list where f_id=*OS-03* | stats values(asset) as search | eval search=mvjoin(search,", OR ")] | fields src src_ip src_f_id _time | stats latest(_time) values(*) by src_ip. Thanks!
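If the goal is only to change how the value is displayed (while keeping the underlying epoch value sortable), fieldformat is an alternative to eval; a sketch against the search above, with the stats output renamed for readability:

Index=a sourcetype=b earliest=-30d [|inputlookup LU0_siem_asset_list where f_id=*OS-03* | stats values(asset) as search | eval search=mvjoin(search,", OR ")]
| fields src src_ip src_f_id _time
| stats latest(_time) as last_seen values(*) as * by src_ip
| fieldformat last_seen=strftime(last_seen, "%Y-%m-%d %H:%M:%S")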
We have been using the SentinelOne app for Splunk Cloud for over a year; lately we are getting the below error. Tried regenerating the API key, no joy.
error_message="[HTTP 404] https://127.0.0.1:8089/servicesNS/nobody/sentinelone_app_for_splunk/configs/conf-authhosts/278c8-73f2-d67a-0211-782344bd8727?output_mode=json" error_type="<class 'splunk.ResourceNotFound'>" error_arguments="[HTTP 404] https://127.0.0.1:8089/servicesNS/nobody/sentinelone_app_for_splunk/configs/conf-authhosts/278c8-73f2-d67a-0211-782344bd8727?output_mode=json" error_filename="s1_client.py" error_line_number="164" input_guid="f6cf841-8787-761-d820-d0d36cebfa" input_name="Activity"
Error filename: s1_client.py
Error line number: 164
Input guid: f6cf841-8787-761-d820-d0d36cebfa
Message: [HTTP 404] https://127.0.0.1:8089/servicesNS/nobody/sentinelone_app_for_splunk/configs/conf-authhosts/278c8-73f2-d67a-0211-782344bd8727?output_mode=json
Hi. I have a lookup file with phone numbers broken down into their parts, so:
cc,npa,nxx,list
1,210,5551234,good
1,512,7779876,bad
My event stream has E.164 phone numbers, so:
+12105551234
+15127779876
I'd like to use the lookup command, but I can't find a way to natively join this data and am looking for ideas. Currently I am able to use join, like this:
... | join number [ | inputlookup lookupfile.csv | eval number="+".cc.npa.nxx ]
but I've learned through this group mostly to try and avoid join because of its limitations, and because "it's basically applying SQL and this ain't SQL". I thought about breaking the number field apart into three fields and then passing several lookups like in the example page linked above, but this feels backwards when what I'd prefer to do is join the data in the CSV. Another idea was to create a report that does that and writes a new CSV (| outputlookup), but that feels unnecessary, too. Any thoughts? THANK YOU!
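One join-free sketch, assuming the event field is called number (as in the join example) and the country code is the single digit 1 as in the sample rows: split the E.164 number into the lookup's columns with rex, then use lookup on all three keys.

... | rex field=number "^\+(?<cc>1)(?<npa>\d{3})(?<nxx>\d{7})$"
| lookup lookupfile.csv cc npa nxx OUTPUT list

This keeps the CSV exactly as it is and avoids both join and a rebuilt lookup file.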
I need to see all events that have fields whose names start with "PROD", e.g. "PROD deploy", "PROD update", etc. `index=myIndex sourcetype=mySourceType "PROD*"="*"` doesn't work. And if an event has such a "PROD*" field, I need to get its value. How can this be done?
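A sketch of one workaround: the base search can't wildcard field names, but foreach can iterate every field whose name matches PROD* and copy the first non-null value into a single (made-up) field, which can then be filtered on.

index=myIndex sourcetype=mySourceType
| foreach "PROD*" [ eval prod_value=coalesce(prod_value, '<<FIELD>>') ]
| where isnotnull(prod_value)

If several PROD* fields can be present on one event, coalesce keeps only the first; switch to mvappend to collect them all.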
Hi all, I've a csv file with 3 columns (ip, earliest, latest) and over 400 rows. I'm trying to return all events associated with the IP for an hour before and after the interesting request time. The search below works for a single row, but I can't figure out how to treat each row as a unique search and compile the results at the end. What appears to happen when I upload multiple rows in the csv is that the search runs for all interesting IPs from the earliest earliest value to the latest latest value. It kind of meets the intent but is very wasteful, as the index is huge and the times span several years with days/months between them. Is what I'm trying to achieve possible? index=myindex client_ip_address earliest latest [| inputlookup ip_list_2.csv | eval ip = "*" . 'Extracted IP' . "*" | eval earliest=strptime('REQUEST_TIME', "%m/%d/%y %H:%M")-(60*60) | eval latest=strptime('REQUEST_TIME', "%m/%d/%y %H:%M")+(60*60) | fields ip earliest latest ]
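One way to get a genuinely per-row time window is the map command, which re-runs a search once per lookup row; a sketch, renaming the spaced column so its value can be used as a token, and with maxsearches raised above the row count (map stops at 10 by default):

| inputlookup ip_list_2.csv
| rename "Extracted IP" as ip
| eval earliest=strptime('REQUEST_TIME', "%m/%d/%y %H:%M")-3600, latest=strptime('REQUEST_TIME', "%m/%d/%y %H:%M")+3600
| map maxsearches=500 search="search index=myindex earliest=$earliest$ latest=$latest$ *$ip$*"

The trade-off is that map runs the 400+ searches serially, so it takes longer to finish, but each run only scans its own two-hour window instead of the whole multi-year span.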
I have been asked to alert when a user deletes an index.   I have found the event in the _internal index, but there is no username attached to the event. index=_internal event=removeIndex   05-13-2024 21:57:01.509 +0000 INFO IndexProcessor [1036423 indexerPipe_1] - event=removeIndex index=deleteme is newly marked for deletion, avoided restart There does not appear to be a corresponding event in the _audit index, so I'm drawing a blank on how to attribute the event to a user account. The solution provided here doesn't appear to work, as I'm not seeing an operations or object field in the _audit index. 
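A hedged idea: deletions made through Splunk Web or REST hit splunkd's REST endpoint, and those calls are logged in _internal together with the calling user, so correlating on the index name may give the attribution. The sourcetype and log format below are assumptions worth verifying against your own instance before alerting on them.

index=_internal sourcetype=splunkd_access "DELETE" "/data/indexes/"
| rex field=_raw "^\S+\s+\S+\s+(?<req_user>\S+)\s+\["
| rex field=_raw "DELETE\s+\S*/data/indexes/(?<deleted_index>[^\s/?]+)"
| table _time req_user deleted_index _raw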
Hello, I have a question. I am a member of the SRE team at Ring Organization. I'm wondering if I can get a trial access to Splunk APM. The main idea is to create an overview of services. I would like to familiarize myself with Splunk APM to see how user-friendly it is and what capabilities it offers. Of course, if you have any wiki or documentation I can study further, that would be great. Thank you in advance!
Hey guys, I am working on a report that needs to show any new employees who joined the company in the last 30 days. Right now I have a report constructed that pulls data over the last 30 days for all employees in the company. How can I filter this report so it only shows employees added to the company within the last 30 days? I will schedule this report to run weekly.
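A common pattern for "new in the last 30 days" is to search back further than 30 days, take each employee's first appearance, and keep only those whose first appearance falls inside the window; a sketch with made-up index and field names:

index=hr_data earliest=-90d
| stats min(_time) as first_seen by employee_id employee_name
| where first_seen >= relative_time(now(), "-30d@d")
| fieldformat first_seen=strftime(first_seen, "%Y-%m-%d")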
I installed Splunk Enterprise 9.2.0.1 without FIPS mode on, and now I've found out I need to have it on. Luckily, I haven't done too much work, just one server and a few Universal Forwarders. I believe I have to scrap the current installation of the SH/indexer and all the UFs, correct? There is no way to enable it in the current install as far as I can tell. Also, are there any files I could save so I can reuse them?
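For reference, FIPS mode is controlled by a single flag in splunk-launch.conf, and it has to be in place before splunkd starts for the first time on that installation, which is why it can't simply be flipped on an existing instance; a sketch of the relevant line on a default Linux install:

# $SPLUNK_HOME/etc/splunk-launch.conf, set before the very first start
SPLUNK_FIPS=1

Configuration under etc/system/local, etc/apps and any lookup files are generally worth copying off before rebuilding; anything holding secrets hashed with the old splunk.secret will likely need to be re-entered on the new FIPS install.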
Hello Community! I am trying to set up a search to monitor PowerShell commands from Windows hosts. Specifically, I am starting from: an index with the full messages related to PS commands, contained in a field named "Message" (related, for example, to event codes 4101, 800, etc.); and a .csv file with the list of commands I would like to monitor, contained in a column named "PS_command". From these premises, I have already constructed a search that leverages inputlookup to search the strings from the PS-monitored.csv file against the index field Message, outputting the result in a table like the following (also adding details from the index: _time, host and EventCode).   index="wineventlog" | search ( [|inputlookup PS-monitored.csv | eval Message= "*" + PS_command + "*" | fields Message] ) | table _time host EventCode Message    This, despite not being the most elegant solution (with the addition of wildcard characters *), is currently working; however, I would also like to include the original search field (the PS_command column from PS-monitored.csv) in the final table. I tried to experiment a bit with the lookup command, and with join options, without success; does anyone have some suggestions? Finally, I would like to avoid using heavy commands, such as join, if at all possible. Thanks in advance!
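One join-free sketch for getting PS_command back out: define a lookup over the same CSV whose match type is WILDCARD, so the lookup itself does the substring match and can return the pattern that hit. The definition name ps_monitored_wc is made up, and the PS_command values in the CSV would need surrounding * wildcards for this to match mid-string.

transforms.conf (or Settings > Lookups > Lookup definitions > Advanced options):
[ps_monitored_wc]
filename = PS-monitored.csv
match_type = WILDCARD(PS_command)

index="wineventlog" | search ( [|inputlookup PS-monitored.csv | eval Message= "*" + PS_command + "*" | fields Message] )
| lookup ps_monitored_wc PS_command as Message OUTPUT PS_command as matched_command
| table _time host EventCode Message matched_command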
Need a report based on the previous day. I have source IP segment xx.xx.xx.xx/28 & destination IP segment xx.xx.xx/24. The outcome of the query should provide the below:
Date and start + end time of the connection
USERNAME
APPLICATION:PORT & PROTOCOL
APPLICATION SEGMENTS
ACCESS POLICY NAME
ACTION
How can I create a customized dashboard? Please suggest.
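A starting-point sketch for such a report, assuming firewall-style events with src_ip, dest_ip, user, app, dest_port, protocol, rule and action fields (the index, field names and the xx placeholders all need to be replaced with your real values):

index=firewall earliest=-1d@d latest=@d
| where cidrmatch("xx.xx.xx.xx/28", src_ip) AND cidrmatch("xx.xx.xx.0/24", dest_ip)
| stats min(_time) as start_time max(_time) as end_time by user app dest_port protocol rule action
| eval start_time=strftime(start_time, "%Y-%m-%d %H:%M:%S"), end_time=strftime(end_time, "%Y-%m-%d %H:%M:%S")

Saving that as a report and adding it to a dashboard panel with a "previous business day" style time range gives the daily view.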
All - I am new to Splunk and trying to figure out a way to return a matched command from a CSV table with inputlookup. I have an ioc_check table containing command strings and descriptions as below:
commands, description
7z a -t7z -r, Compress data for exfiltration
vssadmin.* Delete Shadows, Deletion of Shadow copy
*wmic*process*call*create*, Uses WMI to create processes
wmic*get*http, Using wmic to get and run files from internet
I am using this lookup table's commands strings against the CrowdStrike CommandLine field to hunt for any matching commands run by any user in our environment. So when the CommandLine field from the CrowdStrike logs matches any commands string from the lookup table, it should generate an alert. What we are trying to achieve is that when there is an alert, it should also tell us the description of the matching command, so we know which command matched the CrowdStrike CommandLine. The final result should look like this:
CommandLine: curl -g -k -H user-agent: Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.2; .NET CLR 1.0.3705;) --connect-timeout 5 -d status=2f8bIrZMNKpfunrfOgXZEIywAf18sgF0O6Xgo%3d --retry 0 -L http[:]//wvfg.wetmet[.]net/api/serverservice/heart.php?serverid=1u%2bYbg%2bn25POYs4MAuxnjxQMMDoNMbhWQoixYAF0bjP%2f%2fw%3d
description: Using wmic to get and run files from internet
commands: wmic*get*http
I have come up with the below search; it gives me an alert but it is not able to display the matching command and description. Any help would be much appreciated!
index=crowdstrike event_simpleName=ProcessRollup2 [| inputlookup ioc_check | eval CommandLine="*"+commands+"*" | fields CommandLine] | lookup ioc_check commands OUTPUT description | table _time, CommandLine, description, commands
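The same idea as in the PowerShell question above applies here: the plain | lookup ioc_check commands OUTPUT description only does exact matching, but a lookup definition over the same CSV with a WILDCARD match type lets the lookup do the pattern match and return both the description and the pattern that hit. The definition name ioc_check_wc is an assumption, and rows such as "7z a -t7z -r" would need *...* wildcards added to match inside a longer command line.

[ioc_check_wc]
filename = ioc_check.csv
match_type = WILDCARD(commands)

index=crowdstrike event_simpleName=ProcessRollup2 [| inputlookup ioc_check | eval CommandLine="*"+commands+"*" | fields CommandLine]
| lookup ioc_check_wc commands as CommandLine OUTPUT commands description
| table _time, CommandLine, description, commands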
IT leaders are choosing Splunk Cloud as an ideal cloud transformation platform to drive business resilience, efficiency and scale. By moving deployments to Splunk Cloud Platform (SaaS) and outsourcing infrastructure management tasks to Splunk, organizations are able to focus more on driving business value and innovation. Watch the on-demand cloud event, Get Resiliency in the Cloud!, to hear industry experts from Pacific Dental Services, IDC and The Futurum Group on how to build a strong foundation of resilience and security for your move to the cloud. You will learn about the drivers leading enterprises to move to Splunk Cloud Platform and the benefits they are reaping: a >40% increase in operational efficiencies, a >300% increase in scalability, and >20% savings in cost. Get more information on how to Migrate to Splunk Cloud Platform.
We’re making some changes to Data Management and Infrastructure Inventory for AWS. The Data Management page, which serves as the landing page for getting data into Splunk Observability, now displays Available Integrations and Data Config tools as tabs. Additionally, the lists of EC2 instances and EKS clusters, along with their Collector installation status, have been elevated by being included as tabs alongside the list of AWS integrations. These improvements will make it easier for you to not only get data into Splunk Observability but also manage the incoming data more effectively.
Hello, I'm still new to Splunk and still learning, so apologies for any incorrect naming. I have a search in Splunk that runs daily and does some filtering, then looks up an indexed .csv for additional information. The indexed .csv is ingested into Splunk daily and the files are called "YYYY-MM-DD Report.csv". The search is supposed to take that into consideration and look at the latest report based on the date in the subject. It currently looks like this: | rename Letter as C1111 | table A1111, B1111, C1111 | join type=left C1111 [ search earliest=-24h host="AAA" index="BBB" sourcetype="CCC" | eval dateFile=strftime(now(), "%Y-%m-%d") | where like(source,"%".dateFile."%Report.csv") | rename "Number" as C1111 | eval C1111=lower(C1111) | fields C1111, "1 xxxx","2 yyyy","3 zzzz"] | table A1111, B1111, C1111, "1 xxxx","2 yyyy","3 zzzz" This used to work but stopped a few days back, and I'm unable to figure out what the issue might be.
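When this pattern breaks, the usual culprits are that no file with today's date was ingested in the last 24 hours, or that now() on the search head produces a different date string than the one in the file name (timezone shift or a late-arriving file). A quick check is to run the inner search on its own and compare the computed date against the sources that are actually there; a sketch:

host="AAA" index="BBB" sourcetype="CCC" earliest=-48h
| eval dateFile=strftime(now(), "%Y-%m-%d")
| stats count by source dateFile
| eval matches_today=if(like(source, "%".dateFile."%Report.csv"), "yes", "no")

If matches_today is "no" for every source, the join's subsearch is returning nothing, which would make the lookup columns disappear from the outer table.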
Hi guys! I'm trying to allow Splunk to access the internet to browse and download apps. So far I have opened up "apps.splunk.com" and "splunkbase.splunk.com", but it doesn't seem to do the trick. Any other URLs I need to allow?
Hello. I'm new to Splunk. Recently I have been trying to create and sign my own TLS certificates, following this official guide: https://docs.splunk.com/Documentation/Splunk/9.2.1/Security/Howtoself-signcertificates However, splunkd.log keeps showing this error:
Error setting up SSL for TCP data input from file=inputs.conf stanza="SSL": Can't read key file /opt/splunk/etc/auth/mycerts/myServerCertificate.pem SSL error code=151441516 message="error:0906D06C:PEM routines:PEM_read_bio:no start line"
First, by following the guide, I created:
private key of the root certificate authority, which is myCertAuthPrivateKey.key
CSR for the certificate, which is myCertAuthCertificate.csr
root certificate authority certificate, which is myCertAuthCertificate.pem
Then I created a server certificate and signed it with the root certificate authority certificate:
private key for the server certificate, which is myServerPrivateKey.key
CSR for the server certificate, which is myServerCertificate.csr
server certificate, which is myServerCertificate.pem
Basically, following the guide, I have 6 files in the mycerts folder, plus one .srl file. This Splunk master node connects to 3 indexers (clustered). I followed this guide to modify the configuration files, which I believe are inputs.conf and server.conf. Ref: https://docs.splunk.com/Documentation/Splunk/9.2.1/Security/ConfigureSplunkforwardingtousesignedcertificates
/opt/splunk/etc/system/local/server.conf
[general]
...
[sslConfig]
sslRootCAPath = /opt/splunk/etc/auth/mycerts/myCertAuthCertificate.pem
sslPassword = mypassword
...
/opt/splunk/etc/system/local/inputs.conf
[splunktcp-ssl:9997]
disabled=0
[SSL]
serverCert = /opt/splunk/etc/auth/mycerts/myServerCertificate.pem
sslPassword = mypassword
requireClientCert = true
sslVersions = *,-ssl2
Every time I run service splunk restart, I still get the SSL error. Does anyone know why this is happening? The same error is also happening on the other indexers (same steps as mentioned above).
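A hedged suggestion: "no start line" on "Can't read key file .../myServerCertificate.pem" usually means splunkd is looking for a PRIVATE KEY block inside that PEM and not finding one. For inputs.conf-style server certificates, Splunk expects the server certificate, its private key and the CA certificate concatenated into the single file named by serverCert. A sketch of building and sanity-checking that file, using the file names from the post:

cd /opt/splunk/etc/auth/mycerts
# combine server cert + its private key + CA cert into one PEM
cat myServerCertificate.pem myServerPrivateKey.key myCertAuthCertificate.pem > myServerCert-combined.pem
# sanity checks: certificate and key blocks should both parse
openssl x509 -in myServerCert-combined.pem -noout -subject -issuer
openssl rsa -in myServerPrivateKey.key -check -noout

Then point serverCert in the [SSL] stanza at the combined file and restart. The same combined-PEM layout would apply on the indexers.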
Hi I was wondering if there was a way I could blacklist the following event based on the event code and the account name under the Subject field. So I want to blacklist events of code 4663 with a subject name of COMPUTER8-55$. What would the regex for that look like? 05/10/2024 01:05:35 PM LogName=Sec EventCode=4670 EventType=0 ComputerName=myComputer.net SourceName=Microsoft Windows security auditing. Type=Information RecordNumber=10000000 Keywords=Audit Success TaskCategory=Authorization Policy Change OpCode=Info Message=Permissions on an object were changed.   Subject: Security ID: S-0-20-35 Account Name: COMPUTER8-55$ Account Domain: myDomain Logon ID: 0x3E7   Object: Object Server: Security Object Type: Token Object Name: - Handle ID: 0x1718   Process: Process ID: 0x35c Process Name: C:\Windows\System32\svchost.exe  
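A sketch of one way to do it: the Windows event log input's blacklist settings accept a key=regex format, so the event code and the Subject's Account Name can be combined in one entry. The stanza and blacklist number below are examples, and note the pasted sample is EventCode 4670, so use whichever code you actually want to drop.

# inputs.conf on the forwarder
[WinEventLog://Security]
blacklist1 = EventCode="4663" Message="(?ms)Subject:.+?Account Name:\s+COMPUTER8-55\$"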
Pre-requisite Machine Agent installed on any Linux box that has access to the DynamoDb Database Linux box should have permission to fetch CloudWatch metrics Installation If your Linux Box is on EC2, then please select the IAM Role associated with that EC2 instance and add “CloudWatchFullAccess” permission. Once done, SSH inside the box where your Machine Agent is running.  In the Machine Agent Home folder, go to the Monitors folder and create a directory called "DynamoDb". In my case, MA_HOME = /opt/appdynamics/ma cd /opt/appdynamics/ma/monitors mkdir DynamoDb​ Inside the 'DynamoDb' folder, create a file called script.sh with the below content: NOTE: Please edit TABLE_NAME and AWS_REGION with your desired TABLE_NAME and AWS_REGION #!/bin/bash # Define the DynamoDB table name TABLE_NAME="aws-lambda-standalone-dynamodb" # Define your AWS region AWS_REGION="us-west-2" # Change this to your region # List of all metrics you want to fetch declare -a METRICS=("ConsumedReadCapacityUnits" "ConsumedWriteCapacityUnits" "ProvisionedReadCapacityUnits" "ProvisionedWriteCapacityUnits" "ReadThrottleEvents" "WriteThrottleEvents" "UserErrors" "SystemErrors" "ConditionalCheckFailedRequests" "SuccessfulRequestLatency" "ReturnedItemCount" "ReturnedBytes" "ReturnedRecordsCount") # Define the time period (in ISO8601 format) START_TIME=$(date --date='60 minutes ago' --utc +%Y-%m-%dT%H:%M:%SZ) END_TIME=$(date --utc +%Y-%m-%dT%H:%M:%SZ) # Loop through each metric and fetch the data for METRIC_NAME in "${METRICS[@]}" do # Fetch the metric data using AWS CLI METRIC_VALUE=$(aws cloudwatch get-metric-statistics --namespace AWS/DynamoDB \ --metric-name $METRIC_NAME \ --dimensions Name=TableName,Value=$TABLE_NAME \ --start-time "$START_TIME" \ --end-time "$END_TIME" \ --period 3600 \ --statistics Average \ --query 'Datapoints[0].Average' \ --output text \ --region $AWS_REGION) # Check if metric value is 'None' or empty if [ "$METRIC_VALUE" == "None" ] || [ -z "$METRIC_VALUE" ]; then METRIC_VALUE="0" else # Round the metric value to the nearest whole number METRIC_VALUE=$(printf "%.0f" "$METRIC_VALUE") fi # Echo the metric in the specified format echo "name=Custom Metrics|DynamoDB|$TABLE_NAME|$METRIC_NAME,value=$METRIC_VALUE" done​ If you have multiple tables, then use the script below: #!/bin/bash # List of DynamoDB table names declare -a TABLE_NAMES=("Table1" "Table2" "Table3") # Add your table names here # Define your AWS region AWS_REGION="us-west-2" # Change this to your region # List of all metrics you want to fetch declare -a METRICS=("ConsumedReadCapacityUnits" "ConsumedWriteCapacityUnits" "ProvisionedReadCapacityUnits" "ProvisionedWriteCapacityUnits" "ReadThrottleEvents" "WriteThrottleEvents" "UserErrors" "SystemErrors" "ConditionalCheckFailedRequests" "SuccessfulRequestLatency" "ReturnedItemCount" "ReturnedBytes" "ReturnedRecordsCount") # Define the time period (in ISO8601 format) START_TIME=$(date --date='60 minutes ago' --utc +%Y-%m-%dT%H:%M:%SZ) END_TIME=$(date --utc +%Y-%m-%dT%H:%M:%SZ) # Loop through each table for TABLE_NAME in "${TABLE_NAMES[@]}" do # Loop through each metric and fetch the data for the current table for METRIC_NAME in "${METRICS[@]}" do # Fetch the metric data using AWS CLI METRIC_VALUE=$(aws cloudwatch get-metric-statistics --namespace AWS/DynamoDB \ --metric-name $METRIC_NAME \ --dimensions Name=TableName,Value=$TABLE_NAME \ --start-time "$START_TIME" \ --end-time "$END_TIME" \ --period 3600 \ --statistics Average \ --query 'Datapoints[0].Average' \ --output text \ --region $AWS_REGION) 
    # Check if metric value is 'None' or empty
    if [ "$METRIC_VALUE" == "None" ] || [ -z "$METRIC_VALUE" ]; then
      METRIC_VALUE="0"
    else
      # Round the metric value to the nearest whole number
      METRIC_VALUE=$(printf "%.0f" "$METRIC_VALUE")
    fi

    # Echo the metric in the specified format
    echo "name=Custom Metrics|DynamoDB|$TABLE_NAME|$METRIC_NAME,value=$METRIC_VALUE"
  done
done
Great. Create another file called monitor.xml with the below content:
<monitor>
  <name>DynamoDb monitoring</name>
  <type>managed</type>
  <description>DynamoDb monitoring</description>
  <monitor-configuration>
  </monitor-configuration>
  <monitor-run-task>
    <execution-style>periodic</execution-style>
    <name>Run</name>
    <type>executable</type>
    <task-arguments>
    </task-arguments>
    <executable-task>
      <type>file</type>
      <file>script.sh</file>
    </executable-task>
  </monitor-run-task>
</monitor>
Great work!! Now, let's restart your Machine Agent.
Once you are done, you will be able to see your DynamoDB metrics in the AppDynamics Machine Agent's metric browser.
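How you restart depends on how the Machine Agent was installed; a sketch for two common cases (the service name and paths here are assumptions, adjust them to your setup):

# installed as a service (RPM/DEB packages)
sudo systemctl restart appdynamics-machine-agent

# running from a zip install: stop the existing process and start it again
pkill -f machineagent.jar
nohup java -jar /opt/appdynamics/ma/machineagent.jar > /dev/null 2>&1 &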