All Topics

I would like to perform coloring with minMidMax based on each column's values. However, the columns are dynamic, so it is quite difficult for me to configure coloring column by column; I would end up with manual maintenance. Here is what I have:

<format type="color" field="column A">
  <colorPalette type="minMidMax" maxColor="#53A051" minColor="#FFFFFF"></colorPalette>
  <scale type="minMidMax"></scale>
</format>
<format type="color" field="column C">
  <colorPalette type="minMidMax" maxColor="#53A051" minColor="#FFFFFF"></colorPalette>
  <scale type="minMidMax"></scale>
</format>

If I remove the field attribute, the coloring does work; however, minMidMax is then computed over the whole table and not per column:

<format type="color">
  <colorPalette type="minMidMax" maxColor="#53A051" minColor="#FFFFFF"></colorPalette>
  <scale type="minMidMax"></scale>
</format>

Is there any method to apply minMidMax to every dynamic column I generate, without manual maintenance effort?
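One possible workaround at the SPL level, sketched under the assumption that the dynamic columns come out of a chart/xyseries-style result with _time as the row key: normalize each column to a 0-100 percentage first, so that a single table-wide <format type="color"> behaves like a per-column scale. The field names below are illustrative:

| untable _time column value
| eventstats min(value) as minv, max(value) as maxv by column
| eval value = round(100 * (value - minv) / (maxv - minv), 0)
| fields - minv maxv
| xyseries _time column value

This trades the original cell values for percentages (and a column with only one distinct value would need a guard against dividing by zero), so it only fits panels where relative intensity matters more than the raw numbers.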
Hi Team, I am trying to run a search and get the search ID, which I will use later to fetch the results:

curl --location --request POST 'https://splunkcloud.com:<port>/services/search/jobs?output_mode=json' \
--header 'Content-Type: application/json' \
--header 'Authorization: Bearer JWT' \
--data-raw 'search=search |rest /servicesNS/-/-/saved/searches/ splunk_server=local | rename eai:* as * | rename acl.* as * | search app=* | table triggered_alert_count, title'

I get the SID back and then do a GET:

https://splunkcloud.com:<port>/services/search/jobs/<SID>/results?output_mode=json

which returns the error below:

{ "messages": [ { "type": "FATAL", "text": "Error in 'rest' command: This command must be the first command of a search." } ] }

This works fine for normal searches, but not for searches that start with |rest. Let me know why rest is not accepting the | even after I added it.
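For comparison, a form of the POST that the jobs endpoint generally accepts for generating commands such as rest: the posted search string itself begins with a literal pipe, with no search keyword in front of it (a sketch; host, port, and token are placeholders):

curl --request POST 'https://splunkcloud.com:<port>/services/search/jobs?output_mode=json' \
--header 'Authorization: Bearer <JWT>' \
--data-urlencode 'search=| rest /servicesNS/-/-/saved/searches/ splunk_server=local | rename eai:* as * | table triggered_alert_count, title'

Using --data-urlencode also takes care of URL-encoding the pipes and spaces in the search string.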
Hi folks. I'm attempting to run Splunk in a Docker container. Or rather, I have that working - it was pretty easy with docker-compose based on https://splunk.github.io/docker-splunk/EXAMPLES.html#create-standalone-from-compose

However, I want to create an index automatically when the container first starts up. This I'm finding difficult. I've tried a variety of methods, but they all failed in one way or another:

- yum and dnf are missing from the container, and microdnf appears to be broken. This makes it difficult to customize the container's configuration.
- The container appears to be based on RHEL, and we don't have any RHEL entitlements. This too makes it difficult to customize the container's behavior.
- I tried setting up a volume and adding a script that would start Splunk and shortly thereafter add the index, but I found that Splunk was missing lots of config files this way. This may or may not be due to my relative inexperience with Docker. I invoked the script with the following in docker-compose.yml:

entrypoint: /bin/bash
command: /spunk-files/start

I needed to copy these files, which I didn't have to copy before the entrypoint+command change:

$SPLUNK_HOME/etc/splunk-launch.conf
$SPLUNK_HOME/etc/splunk.version

I also needed to create some logging directories, otherwise Splunk would fail to start. One of my favorite troubleshooting techniques, using a system call tracer like "strace", wasn't working because I couldn't install it - see above under microdnf.

Does anyone know of a good way to auto-create a Splunk index at container creation time, without an RHEL entitlement? Thanks!
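One approach often used with the docker-splunk image, sketched here with illustrative names: mount a small app containing an indexes.conf under /opt/splunk/etc/apps, so the index exists as soon as splunkd starts and nothing has to be installed inside the container.

# docker-compose.yml (a sketch; image tag, password, and paths are placeholders)
version: "3.6"
services:
  splunk:
    image: splunk/splunk:latest
    environment:
      - SPLUNK_START_ARGS=--accept-license
      - SPLUNK_PASSWORD=changeme123
    ports:
      - "8000:8000"
    volumes:
      - ./my-index-app:/opt/splunk/etc/apps/my-index-app

# ./my-index-app/default/indexes.conf
[myindex]
homePath   = $SPLUNK_DB/myindex/db
coldPath   = $SPLUNK_DB/myindex/colddb
thawedPath = $SPLUNK_DB/myindex/thaweddb

This keeps the stock entrypoint intact, which sidesteps the missing splunk-launch.conf and logging-directory problems described above.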
I want to forward logs in CEF format from Splunk to a 3rd-party system over TCP. To achieve this, I'm using the Splunk App for CEF. I went through the steps (Select Data, Map Fields, Create Static Fields, Define Output Groups, Save Search), but at the Save Search step, when I click Next to go to the next step, I get the following error: [error screenshot]. I tried the generated query in Search and it works fine. I tried reinstalling the app, but the error is still the same. I'd appreciate any help I can get. I'm also open to alternative methods of forwarding alerts in CEF format from Splunk to external systems over TCP.
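Since the post asks for alternatives: one commonly used route is a syslog output group in outputs.conf on the forwarding tier, which ships events over TCP without the app (a sketch; the group name, host, and port are placeholders):

[syslog:cef_out]
server = third-party.example.com:514
type = tcp

This sends the raw event text, so the events would still need to be CEF-formatted before they reach this output.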
Hi there, so I have a table with server names and their load values:

Server   Load capacity
G1       10
G1       80
G2       6
G2       25
G1       50
G3       15
G2       5
G4       20
G5       30

and so on...

Is there a way to get the sum of the top 3 Load values by Server? I can do that if I limit it to just one server:

my search | search "Server"="G1" | sort - Load | head 3 | stats sum(Load)

But I want to know it for all servers, to see which one is getting the highest loads on average.
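A sketch of one common pattern for "top N per group, then aggregate", assuming Load is the numeric field:

my search
| sort 0 - Load
| streamstats count as rank by Server
| where rank <= 3
| stats sum(Load) as top3_load by Server
| sort - top3_load

streamstats counts rows per Server in descending-Load order, so rank <= 3 keeps each server's three largest values before the final sum.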
We have merged with another company that has a Splunk cluster in AWS. They would like to extend services to other environments in AWS. Instead of routing to the other environments by connecting the Splunk VPC to the other VPCs using transit gateways, I would like to put the indexers behind a network load balancer and use AWS PrivateLink.

PrivateLink requires putting an NLB (network load balancer) in front of the cluster and configuring the indexers as targets. The receiver builds an endpoint service in the VPC, which assigns local addresses that can be reached without routing. The DNS name for the service must be made to resolve to the local address by creating a hosted zone in Route 53. So, for example, if the local VPC of the log sender is 10.1.1.0/24 and the name is splunk.cluster.com, PrivateLink will use an IP address in the 10.1.1.0/24 range, and splunk.cluster.com will resolve to that IP address. I have read that you must be able to resolve multiple IP addresses for that name. I have asked my AWS representative to investigate whether this would work, and he told me that other users are designing access this way.

There are 5 indexers spread across 3 availability zones. The domain controllers that want to send the logs will be using a UF to send them. The advantage of using PrivateLink is that we can provide access to Splunk across different VPCs and organizations without having to open up CIDR-block access and filter access with security groups and NACLs.
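For the forwarder side, the configuration would presumably be the usual outputs.conf pointing at the PrivateLink DNS name, letting the NLB spread connections across the indexers (a sketch; the hostname and port are placeholders):

[tcpout]
defaultGroup = privatelink

[tcpout:privatelink]
server = splunk.cluster.com:9997

One caveat worth testing: a UF normally load-balances by rotating through the server list itself, so fronting all indexers with a single NLB name changes how traffic is distributed compared with listing the indexers individually.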
Hello, I am trying to write a query to identify whether any Splunk notable rule triggers with a change in urgency (i.e., from medium to high). Could anyone please help me build the query?
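A starting-point sketch, assuming the Enterprise Security `notable` macro and its urgency and event_id fields are available:

`notable`
| stats earliest(urgency) as first_urgency, latest(urgency) as last_urgency by event_id
| where first_urgency != last_urgency

This flags notables whose urgency differs between their earliest and latest records; the exact grouping key depends on how the notables are updated in your environment.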
Via Sigma (a rule format for SIEMs) converters, it is possible to convert Sigma rules to Splunk queries. This is a well-established process and can be done through tools like https://github.com/SigmaHQ/pySigma or https://github.com/SigmaHQ/sigma. My question is: is there any way to do the reverse? Is there a way to convert Splunk queries into Sigma rules?
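For reference, the forward direction the post mentions is typically a one-liner with sigma-cli (a sketch; it assumes a Splunk backend plugin such as pySigma-backend-splunk is installed):

sigma convert -t splunk rule.yml

The reverse direction would mean parsing arbitrary SPL back into Sigma's structured detection logic, which is a much harder problem than rendering Sigma out to a query string.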
I am hoping you could help me out with this query, as I am quite stuck. I want to be able to retrieve the name of the server that acts as a provider and the name of the server that acts as a consumer. The way to check this is that a log has an IDConsumer that equals the ID of the other server. For instance, here are two logs:

ServerName="Server1", ID="1", IDConsumer=null
ServerName="Server2", ID="2", IDConsumer="1"

And what I want to retrieve is a table like this:

To       From     IDConsumer  IDProvider
Server1  Server2  1           2

Appreciate it a lot!
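A sketch of one way to stitch the pairs together with a self-join, assuming everything lives in one index (the index name is a placeholder):

index=mylogs IDConsumer=*
| rename ServerName as From, ID as IDProvider
| join type=inner IDConsumer
    [ search index=mylogs
      | rename ServerName as To, ID as IDConsumer
      | fields To IDConsumer ]
| table To From IDConsumer IDProvider

The outer search keeps only the consumer-side logs; the subsearch relabels every log's ID as IDConsumer so the join can match Server2's IDConsumer="1" to Server1's ID="1".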
Hello Community, I'm working on a search for a dashboard panel and I need some help. I'm looking to get the owner, search_name, status_label, and the last comment. The search I have so far is below:

`notable`
| where owner=="User1" OR owner=="User2" OR owner=="User3" OR owner=="User4" OR owner=="User5" OR owner=="User6"
| where status_label=="Ready for Review" OR status_label=="Closed: False Positive" OR status_label=="Pending" OR status_label=="Closed: Valid - Remediated"
| stats earliest(owner) AS Analyst, earliest(search_name) AS "Alert Name", latest(status_label) AS Status, latest(comment) AS Summary
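One detail that may matter, sketched as an assumption about the intent: stats without a by clause collapses everything to a single row, so getting one row per analyst and alert usually needs a grouping key, e.g.:

`notable`
| search owner IN ("User1","User2","User3","User4","User5","User6")
    status_label IN ("Ready for Review","Closed: False Positive","Pending","Closed: Valid - Remediated")
| stats latest(status_label) AS Status, latest(comment) AS Summary by owner, search_name

Here owner and search_name are carried through as the row identity instead of earliest() aggregates.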
I am currently using a bar chart visualization, but I need to sort the bars in descending order. I can't use a simple

chart count by EVNTSEVCAT | sort - count

because the SEVCAT field contains multiple values and we only need I, II, and III. Below is my query:

search *
| eval CATI = if(SEVCAT=="I", 1, 0)
| eval CATII = if(SEVCAT=="II", 1, 0)
| eval CATIII = if(SEVCAT=="III", 1, 0)
| chart sum(CATI) sum(CATII) sum(CATIII)
| transpose

The visualization: [bar chart screenshot]

I need the visualization to be sorted in descending order. Any suggestions help :-). Thank you, Marco
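A sketch that may avoid the transpose entirely: filter to the three categories first, then a plain stats/sort gives one row per category, already in descending order for a bar chart:

search * SEVCAT IN ("I","II","III")
| stats count by SEVCAT
| sort - count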
I have an installation issue. I am trying to upgrade from Splunk 8.0.5 to 8.2.4. Here are the errors I'm receiving:

splunk-8.2.4-87e2dda940d1-linux-2.6-x86_64.rpm: Header V4 RSA/SHA512 Signature, key ID b3cd4420: NOKEY
^Cerror: can't create transaction lock on /var/lib/rpm/.rpm.lock (Interrupted system call)

The installation just hangs at this point, and I have to back out of it. I've stopped the Splunk processes (splunkd). Any idea what's going on?
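For what it's worth, the NOKEY line is usually just a warning that rpm doesn't have Splunk's GPG signing key imported; a sketch of the usual remedy (the key file is downloadable from Splunk's site - verify the source before importing):

rpm --import SplunkPGPKey.pub
rpm -Uvh splunk-8.2.4-87e2dda940d1-linux-2.6-x86_64.rpm

The transaction-lock error in the paste follows a ^C, which suggests it is a consequence of interrupting rpm rather than the original cause of the hang.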
This is a log example:

2022-04-19 11:33:41 Local1.Info 10.0.6.1 Apr 19 12:34:20 FireboxM470_HA2 801002AA8CC3A FireboxM471_HA (2022-04-19T18:34:20) firewall: msg_id="3000-0151" Allow Firebox External-H udp 206.131.15.124 78.243.26.213 2267 53 geo_src="USA" geo_dst="USA" duration="36" sent_bytes="105" rcvd_bytes="121" (Any From Firebox-00)

I need to extract the src_ip (206.131.15.124) and the dst_ip (78.243.26.213). Splunk does not create a proper regex by itself, no matter how many examples I give. I am looking for a regex that matches the 2nd IP in the log, and another one for the 3rd. So far I have tried "(\d{1,3}\.){3}\d{1,3}", which matches all 3 IPs, but I don't know how to select just one of them. And "(tcp|udp)\s((\d{1,3}\.){3}\d{1,3})", which returns the second IP along with the protocol; I don't know how to remove the protocol and the space. Does anyone know how to extract those fields as new fields?
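Building on the protocol-anchored attempt in the post, a sketch with named capture groups, which rex turns into fields directly:

| rex "(?:tcp|udp)\s+(?<src_ip>\d{1,3}(?:\.\d{1,3}){3})\s+(?<dst_ip>\d{1,3}(?:\.\d{1,3}){3})"

The (?:...) groups are non-capturing, so only src_ip and dst_ip come out as fields; the protocol anchors the match but is not captured.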
Below are my raw logs. I want to extract "analystVerdict" and its corresponding value from the raw logs. Can someone please help?

\"mitigationStartedAt\": \"2022-04-13T03:57:58.393000Z\", \"status\": \"success\"}], \"threatInfo\": {\"analystVerdict\": \"false_positive\", \"analystVerdictDescription\": \"False positive\", \"automaticallyResolved\": false, \"browserType\": null, \"certificateId\": \"\", \"classification\": \"Malware\",

I tried the search below, but I am failing to get the result:

index=test_summary
| rex field=_raw ":\\\"(?<analystVerdict>\w+)\\\""
| table search_name analystVerdict
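A sketch that sidesteps the escaped-quote noise by anchoring on the key name itself and letting \W+ absorb the \": \" punctuation between key and value:

index=test_summary
| rex "analystVerdict\W+(?<analystVerdict>\w+)"
| table search_name analystVerdict

Since analystVerdict appears before analystVerdictDescription in the event, the first match picks up the verdict (false_positive here).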
Hi, I have an index with one field as a timestamp, "SESSION_TIME", and another field, "SEQUENCE". The "SEQUENCE" field is unique for each event, and I am tasked with replacing the seconds part of each timestamp with the respective "SEQUENCE" number. This is what I currently wrote, but it is clearly incorrect:

eval xxx = strftime(SESSION_TIMESTAMP,"%S" = "SEQUENCE")

Can you please help? Thanks, Patrick
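A sketch of the usual shape for this, assuming SESSION_TIME is in epoch form and SEQUENCE fits the seconds slot: format everything up to the minutes with strftime, then concatenate SEQUENCE in place of %S:

| eval xxx = strftime(SESSION_TIME, "%Y-%m-%d %H:%M:") . printf("%02d", SEQUENCE)

The . operator is SPL string concatenation, and printf keeps the seconds zero-padded to two digits.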
Hello, I have a tricky question. I'm trying to count tickets by the providers we have. I am using the parent and subtasks to check which team we send a subtask to, plus the service, to know the provider. I'm stuck on cases like this one - three events:

- the parent task, with no to_team and no provider
- subtask 1, with a to_team
- subtask 2, with a provider (different from the team above)

Right now all three are counted: Provider1 (subtask 1), Provider2 (subtask 2), and Other (parent task). However, what I need is to avoid counting the parent task when a subtask carries the needed information. Some parent tasks with no information do have to land in the "Other" section, because they need to be counted - but only when there is no subtask attached. Is this possible? I have tried subsearches, but I cannot get one that works. Thank you in advance.
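A sketch of the usual eventstats pattern for "suppress the parent when children exist" - all field names here are assumptions, since the post doesn't show the schema:

| eventstats count(eval(type="subtask")) as subtask_count by parent_id
| where NOT (type="parent" AND subtask_count > 0)

eventstats annotates every event with the number of subtasks sharing its parent_id, so parents are dropped only when at least one subtask exists.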
I have a multisite cluster with 3 sites: 6 indexers as peer nodes clustered across the 3 sites (2 indexers each), managed by a manager node. We also have 2 SH clusters across the 3 sites:

SHcluster1 - 9 SHs in total (3 SHs in each site)
SHcluster2 - 6 SHs in total (2 SHs in each site)

I wanted to understand what the configuration will look like on the deployer, on each SH, and on the manager node, given that this is a multisite cluster. As far as I know, for a multisite cluster the configuration for a single SH is:

Configure the search heads
-----------------------------------
sudo ./splunk edit cluster-config -mode searchhead -site site1 -manager_uri <URI>:<mngmtPort> -secret <secretkey>

So, likewise, what will the configuration be for SH clusters in a multisite setup?
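For orientation, a sketch of the two configuration layers each SHC member typically carries - the first joins the search head cluster, the second attaches it to the multisite indexer cluster as a search head (labels, URIs, and secrets are placeholders):

sudo ./splunk init shcluster-config -mgmt_uri https://<thisSH>:8089 \
    -replication_port 9200 -shcluster_label shcluster1 -secret <shc_secret>

sudo ./splunk edit cluster-config -mode searchhead -site site1 \
    -manager_uri https://<manager>:8089 -secret <idxcluster_secret>

Each member would use the site value for wherever it sits, and each SH cluster gets its own label and its own deployer.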
Hello, since some domain e-mail changes in the company, I ended up having different users on splunk.com (here in this community). Is there any way I can get rid of the old ones while keeping all the information in the latest, newest-created profile? I.e., a mail1 user created in 2018 + a mail1 user created last week, and I'm the same person for both profiles; the only change is the e-mail. Thanks
Hi, we have an event with the time field Time=1650461136000. Our props configuration parses the time into:

_time: 2022-04-20 16:25:36
_indextime: 04/20/2022 16:22:43

[props]
TIME_PREFIX = ,\Time\=
TIME_FORMAT = %s%3N

That means the data is ingested with a future time. With that being said, what are we missing? Why do we still receive the warning

"WARN DateParserVerbose - Failed to parse timestamp in first MAX_TIMESTAMP_LOOKAHEAD (350) characters of event. Defaulting to timestamp of previous event"

Thank you!
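A sketch of a tighter stanza for this shape of event, assuming the field really appears as ,Time= in the raw text - the backslashes in the posted TIME_PREFIX escape characters that don't need escaping, and the lookahead only needs to cover the 13-digit epoch-millisecond value that follows the prefix (the stanza name is a placeholder):

[your_sourcetype]
TIME_PREFIX = ,Time=
TIME_FORMAT = %s%3N
MAX_TIMESTAMP_LOOKAHEAD = 13

MAX_TIMESTAMP_LOOKAHEAD is counted from the end of the TIME_PREFIX match, so 13 characters covers epoch milliseconds; the warning usually means the prefix regex failed to match some events, leaving the parser to scan the whole lookahead window instead.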