All Posts


@richgalloway we are getting the errors below, although Splunk is up and running and the configuration is also good:

10-03-2023 08:04:43.963 -0400 ERROR TcpOutputFd [5866 TcpOutEloop] - Connection to host=10.246.250.154:9998 failed
10-04-2023 08:02:47.688 -0400 WARN TcpOutputFd [3703313 TcpOutEloop] - Connect to 10.246.250.155:9998 failed. No route to host
10-04-2023 08:02:47.750 -0400 WARN TcpOutputFd [3703313 TcpOutEloop] - Connect to 10.246.250.156:9998 failed. No route to host
The official docs troubleshooting page begs to differ: https://docs.splunk.com/Documentation/AddOns/released/MSO365/Troubleshooting#:~:text=Azure%20Active%20Directory.-,Certificate%20verify%20failed%20(_ssl.c%3A741)%20error%20message,-If%20you%20create
This subsearch iterates <<FIELD>> over IP and OS. So, both. (<<FIELD>> is not meta code; it is part of the foreach syntax.)
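For anyone unfamiliar with foreach, here is a minimal runnable sketch (the field names and values are made up for illustration) showing how <<FIELD>> is substituted with each listed field name in turn:

| makeresults
| eval IP="10.0.0.1", OS="Linux"
| foreach IP OS
    [eval <<FIELD>> = "checked:" . <<FIELD>>]

Each iteration rewrites one field, so the result has IP="checked:10.0.0.1" and OS="checked:Linux".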
The app is using /{splunk_home}/splunk/lib/python3.7/site-packages/certifi/cacert.pem, which is the issue. The app is not using /{splunk_home}/etc/auth/cacert.pem; it is using the certifi library cacert.pem instead.
Hi, I am new to Splunk metrics search. I am sending AWS/EBS metrics to Splunk. I want to calculate the average throughput and number of IOPS for my Amazon Elastic Block Store (Amazon EBS) volume. I found the solution for AWS CloudWatch (https://repost.aws/knowledge-center/ebs-cloudwatch-metrics-throughput-iops), but I don't know how to do the equivalent search in Splunk. This is the max I can do atm:

| mpreview index=my-index
| search namespace="AWS/EBS" metricName=VolumeReadOps

I would really appreciate it if someone could help me out.
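Not a definitive answer, but a minimal mstats sketch of the AWS formulas (total operations or bytes divided by the period in seconds), assuming the metrics are indexed under their CloudWatch names, a VolumeId dimension exists, and a 5-minute (300-second) collection period:

| mstats sum(VolumeReadOps) as readOps sum(VolumeWriteOps) as writeOps sum(VolumeReadBytes) as readBytes sum(VolumeWriteBytes) as writeBytes where index=my-index namespace="AWS/EBS" span=5m by VolumeId
| eval avgIOPS = (readOps + writeOps) / 300
| eval avgThroughputKiBps = ((readBytes + writeBytes) / 300) / 1024

Verify the actual metric and dimension names with your own | mcatalog or | mpreview output before relying on this.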
It is a little unclear how to help you as you haven't provided (anonymised) examples of the events you are dealing with. For example, do you get one event per host, with all their risks; one event per risk, with all the hosts; or, one event per host per risk, i.e. one host, one risk in each event. Also, coalesce() does not function the way you seem to be using it - it doesn't concatenate the fields, it merely finds the first non-null field in the list.
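To make the difference concrete, a tiny sketch with made-up values:

| makeresults
| eval extracted_Host="10.1.2.3", Risk="Medium"
| eval first_nonnull = coalesce(extracted_Host, Risk) ``` -> "10.1.2.3" (first non-null field, no combining) ```
| eval concatenated = extracted_Host . ":" . Risk ``` -> "10.1.2.3:Medium" (the . operator concatenates) ```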
because I cannot use the eval twice
Hi @yuanliu, I think I want what your example does, but what field should I put in the eval below?

[eval <<FIELD>> = if(mvindex(<<FIELD>>, 0) == mvindex(<<FIELD>>, 1), mvindex(<<FIELD>>, 0), mvzip(origins, <<FIELD>>, ":"))]
| fields - origins

Is it os or ip?
Because there are three fields, you need to be more descriptive about how you want the differences to be highlighted. Maybe you can illustrate different data combinations and desired results? To start, @bowesmana's formula outputs a line when any field is different; there can be one, two, or three fields that are different. (Also thanks for a great demonstration of the append option in inputlookup!)

Let me start with an example.

lookup_A.csv:
Hostname,IP,OS
splunk.com,10.0.0.1,MacOS
youtube.com,10.0.0.2,Linux
google.com,10.0.0.3,Windows
infoseek.com,10.0.0.5,Solaris
yahoo.com,10.0.0.4,AIX

lookup_B.csv:
Hostname,IP,OS
splunk.com,10.0.0.1,MacOS
youtube.com,10.0.0.2,Linux
google.com,10.0.0.8,Windows
yahoo.com,10.0.0.4,Windows

Here, I only illustrated two variations. There can be more. Specifically, I didn't make variance in Hostname. But I will use it to anchor other variants. If Hostname is also variant, the following formula will still work if you anchor on Hostname; if you anchor on another field, the answer will be rather different depending on other choices you may make.

To highlight differences anchored on Hostname (i.e., based on the assumption that Hostname is unique), you can do

| inputlookup lookup_A.csv
| eval origin = "A"
| inputlookup append=t lookup_B.csv
| eval origin = coalesce(origin, "B")
| stats dc(origin) as originCount values(origin) as origins by Hostname IP OS
| where originCount=1
| fields - originCount
| stats list(*) as * by Hostname
| foreach IP OS ``` anchor on Hostname, seek variance in IP, OS ```
    [eval <<FIELD>> = if(mvindex(<<FIELD>>, 0) == mvindex(<<FIELD>>, 1), mvindex(<<FIELD>>, 0), mvzip(origins, <<FIELD>>, ":"))]
| fields - origins

The above sample data will give

Hostname      IP                      OS
google.com    A:10.0.0.3 B:10.0.0.8   Windows
infoseek.com  A:10.0.0.5              A:Solaris
yahoo.com     10.0.0.4                A:AIX B:Windows

Is this something you could use?
We have Splunk Heavy Forwarders running in a couple of different regions/accounts in AWS. We need to ingest CloudWatch Logs into the Splunk Heavy Forwarder, and the proposed architecture is as follows: CloudWatch Logs (multiple accounts) >> near-real-time streaming through KDF >> S3 bucket (centralized bucket) >> (SQS) >> Splunk Heavy Forwarder. We are looking for an implementation document, mainly for aggregating CloudWatch logs to S3 (from multiple accounts), and for ways to improve the architecture. Direct ingestion from CloudWatch Logs or KDF to Splunk is not preferred; S3 centralized logging is preferred. We would like to reduce management overhead (hence we don't prefer managing Lambdas unless we have to) and also be cost effective. Kindly include implementation documentation if available.
Hello, I am new to Splunk and need help. I get a vulnerability scan log every week with 2 main fields: "extracted_Host" and "Risk".
Risk values are: Critical, High and Medium (in the log it is often Medium, so I must only search for Risk Medium and everything else is excluded).
extracted_Host: I get many different host IPs.
I must filter which host gets which Risk (hosts can have multiple Risk values), and what risk is falling away on which date and what risk is new.
Right now I am here; the problem is I get only one host with all value fields, and not how many Risk classifications are really on this host, without any time:

index=nessus Risk IN (Critical,High,Medium)
| fields extracted_Host Risk
| eval Host=coalesce(extracted_Host,Risk)
| stats values(*) as * by Host

Thanks for the help
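Without knowing the exact event shape, here is a minimal sketch assuming one event per host per risk: it tracks when each host/Risk pair was first and last seen, so a lastSeen older than the newest scan suggests the risk has fallen away, and a recent firstSeen suggests it is new:

index=nessus Risk IN (Critical,High,Medium)
| stats count as occurrences earliest(_time) as firstSeen latest(_time) as lastSeen by extracted_Host Risk
| eval firstSeen=strftime(firstSeen, "%Y-%m-%d"), lastSeen=strftime(lastSeen, "%Y-%m-%d")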
Hi @krish1733, please check the Splunk documentation for this topic: https://docs.splunk.com/Documentation/Splunk/9.1.1/Search/Specifytimemodifiersinyoursearch

Splunk provides many options to specify these times; for example, you can calculate them relatively, or you can use a subsearch to calculate them and pass the result to the main search. Let us know more about your requirements so we can suggest the best ideas/solutions.

As you are a new member, I would like to suggest: karma points / upvotes are appreciated, and if any post solves your question, please "accept that as the solution" so the question moves out of the unanswered queue; it will also help those who help you. Thanks.
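For instance, a small illustration of relative time modifiers with snap-to units (the index name is a placeholder):

index=web earliest=-7d@d latest=@d

This searches from midnight seven days ago up to midnight today; the @d snaps each time to the start of the day.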
Is there a reference document that helps us identify the number of CPU cores vs. concurrent searches that can be run? We want to take this back to the security folks to see if there is an opportunity to optimize the currently underutilized instances (single-digit CPU%) and thereby reduce costs.
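As a rough starting point (based on the default limits.conf settings, so treat this as an assumption to verify against your version's documentation), historical search concurrency per search head is:

max_hist_searches = max_searches_per_cpu x number_of_cpus + base_max_searches

With the defaults (max_searches_per_cpu=1, base_max_searches=6), a 16-core instance would allow 1 x 16 + 6 = 22 concurrent historical searches.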
Thanks a lot for your quick help and support , Query is working as expected.    
earliest and latest are search terms, not commands; remove the pipe '|', which separates commands in the search.
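For example (with a placeholder index name), index=web | earliest=-24h fails because earliest is not a command, whereas this works:

index=web earliest=-24h latest=now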
Yeah, you can't do that. Each "row" is an event, a stats event. You can't split the event part way through. You would need to create a new event.
Hi @a2my12... we may need moooore details from you, actually. May we know if you have installed the add-on/apps for email (is it MS Office 365)? Did you install it recently or long ago? I mean, have the emails already been ingested for some time?
The app uses /{splunk_home}/etc/auth/cacert.pem rather than any certifi library cacert.pem
Just wanting to know if there is a way I can check, in one of the fields, whether an email containing malware has been deleted or whether it is still in the inbox?
we are getting the errors below, although Splunk is up and running and the configuration is also good:

10-03-2023 08:04:43.963 -0400 ERROR TcpOutputFd [5866 TcpOutEloop] - Connection to host=10.246.250.154:9998 failed
10-04-2023 08:02:47.688 -0400 WARN TcpOutputFd [3703313 TcpOutEloop] - Connect to 10.246.250.155:9998 failed. No route to host
10-04-2023 08:02:47.750 -0400 WARN TcpOutputFd [3703313 TcpOutEloop] - Connect to 10.246.250.156:9998 failed. No route to host