All Topics

Can Threat Explorer items from Microsoft Defender for Office 365 appear in Splunk? My client wants to check whether sent and received mail contains attachments. I registered the WindowsDefenderATP app in Azure AD; is this correct?
Hello Experts, I have Splunk Enterprise up and running with the trial version installed, so the license group was the trial license group. I then obtained a license to increase the daily volume for a year and installed it without changing the license server group. The UI asked me to restart the server, which I did; however, the UI still reflects the trial version, which supports only 500 MB. If I change the server group to Enterprise and add the license, it says the license is already installed. But during selection of the Enterprise server group it says: "There are no valid Splunk Enterprise licenses installed. You will be prompted to install a license if you choose this option." What is the resolution? Should I delete the license and change the group?
I want to use the values() function because I want to group by fields. If I just use count by, I get the correct result, but it doesn't look nice. If I use the values() function, the counts get swapped.

This is how count by returns the results:

Function       | Status | count
Authentication | Pass   | 10
Authentication | Fail   | 3

This is how values() returns the results:

Function       | Status | count
Authentication | Pass   | 3
               | Fail   | 10

Here is the count by search:

| stats count by Function, Status | table Function, Status, count

Here is the values() search:

| stats count by Function, Status | stats values(Status) as Status, values(count) as Count by Function | table Function, Status, Count

So my question is: how do I group by Function while keeping the correct count for each Status?
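One way to keep each Status paired with its count (a sketch based on the field names above, untested against your data) is to combine the pair into a single value before the second stats, and use list(), which preserves row order:

```
| stats count by Function, Status
| eval StatusCount = Status . ": " . count
| stats list(StatusCount) as "Status: count" by Function
```

The trade-off is that Status and count land in one column, but each value stays attached to its own count, since values() deduplicates and sorts each field independently.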
Hi team, I downloaded the "IBM Resilient/SOAR Splunk Add-on" and restarted Splunk. Then I entered the connection information; I'm sure the IP and organization information are correct, as we did it this way in another integration. I also created an API key and secret via IBM Resilient and entered them as well. But I still encounter the error shown in the pop-up in the screenshot below. Has anyone encountered this before? Thanks
Hi All, I want to monitor files whose filenames change according to the current date, under the respective year and month directories. Can anyone please help me out with how to monitor these? I tried using wildcards in inputs.conf, but it does not seem to be working.

Format: D:\Logs\<dynamic-year>\<dynamic-month>\<dynamic-date>.txt
Example: D:\Logs\2022\04\21042022.txt

I used the below config in inputs.conf:

[monitor://D:\Logs\*\*\*.txt]
disabled = true
crcSalt = <SOURCE>
index = indexname
sourcetype = sourcetypename

Many thanks in advance!
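For what it's worth, a sketch of the stanza with two likely issues adjusted (`disabled = true` turns the input off, and Splunk's `...` wildcard matches any number of directory levels, whereas `*` matches only one path segment); the index and sourcetype names are kept from the example above:

```
[monitor://D:\Logs\...\*.txt]
disabled = false
crcSalt = <SOURCE>
index = indexname
sourcetype = sourcetypename
```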
Hi Folks, I need help understanding the requirement of an "api-user" (Controller local user) with administrative rights for auto-instrumentation using the Cluster Agent on EKS. We have installed the Cluster Agent successfully into our EKS cluster and it is reporting data properly; now we are planning to auto-instrument all the running containers/pods. While going through the documentation, I found that there is a requirement to create a local user with an administrator role. I don't want to give the application team a local user with admin rights due to security concerns. Kindly suggest what else we can do here. Also, why doesn't AppDynamics use "API Client" token-based authentication instead of a user? Reference documentation: https://docs.appdynamics.com/21.4/en/infrastructure-visibility/monitor-kubernetes-with-the-cluster-agent/auto-instrument-applications-with-the-cluster-agent
The following search does not produce any results:

index=* earliest="04/19/2022:15:00:00" latest="04/19/2022:17:00:00" | fields _time, index, sourcetype | head 1 | eval mail=[| makeresults | eval mail="\"abc@cde.com\"" | return $mail]

The search and the subsearch each produce one result on their own. The search also returns one result as expected when the earliest and latest options in the base search are omitted.
Hi Team, there is column formatting on a table to highlight values with colours, but the formatting shows the column type as "String" instead of "Number". I tried updating it in the code but am still unable to change it. Could anyone please let me know how to change the column type from "String" to "Number" so the colour values can be applied?
Hello Community, how would I extract fields from raw data containing auto-populated numbers in the fields I am trying to extract? The below example is a field containing raw data. Notice the numbers inside the brackets: they are not the same across events and will vary between 1 and 2 digits. For the below example, I would like to extract values for user_id, NAME, and Car. What would the rex command be?

Event 1 for _raw field:
user_id:[4] "peter1234" NAME:[10] "Peter" Car:[3] "Pinto"

Event 2 for _raw field:
user_id:[11] "peter1234" NAME:[5] "Peter" Car:[9] "Gremlin"

Thank you for any assistance. Joe
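A rex sketch matching the two sample events above (the \[\d+\] parts skip the bracketed counts regardless of how many digits they contain; untested beyond these samples):

```
| rex field=_raw "user_id:\[\d+\]\s+\"(?<user_id>[^\"]+)\"\s+NAME:\[\d+\]\s+\"(?<NAME>[^\"]+)\"\s+Car:\[\d+\]\s+\"(?<Car>[^\"]+)\""
```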
Hi, can anyone please help me with this query? Currently I am using | rename * AS \|*\| but I don't want \ in the header; I want |PeriodDate| only. This is the current output:

\|PeriodDate\| \|VendorName\| \|ContractName\| \|Code\|
|2021/12/19|   |SAM|          |HI-HI|          |511|
|2021/12/19|   |SAM|          |HI-HI|          |51.1|
|2021/12/19|   |SAM|          |HI-HI|          |51.1|
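A sketch of the rename with the target quoted instead of backslash-escaped (quoting the new field name generally keeps special characters literal without emitting the backslashes; untested on your data):

```
| rename * AS "|*|"
```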
I would like to perform coloring with minMidMax based on each column's values. However, the columns are dynamic, so it is quite difficult for me to configure coloring per column; I would end up needing manual maintenance:

<format type="color" field="column A"> <colorPalette type="minMidMax" maxColor="#53A051" minColor="#FFFFFF"></colorPalette> <scale type="minMidMax"></scale> </format> <format type="color" field="column C"> <colorPalette type="minMidMax" maxColor="#53A051" minColor="#FFFFFF"></colorPalette> <scale type="minMidMax"></scale> </format>

If I remove the field attribute, it does perform the coloring, but then minMidMax is applied across the whole table rather than per column:

<format type="color"> <colorPalette type="minMidMax" maxColor="#53A051" minColor="#FFFFFF"></colorPalette> <scale type="minMidMax"></scale> </format>

Is there any method to apply minMidMax to each of my dynamic columns without manual maintenance effort?
Hi Team, I am trying to run a search and get the search ID, which I will use later to fetch the results:

curl --location --request POST 'https://splunkcloud.com:<port>/services/search/jobs?output_mode=json' \ --header 'Content-Type: application/json' \ --header 'Authorization: Bearer JWT' \ --data-raw 'search=search |rest /servicesNS/-/-/saved/searches/ splunk_server=local | rename eai:* as * | rename acl.* as *|search app=*| table triggered_alert_count, title

I am getting the SID and then doing a GET:

https://splunkcloud.com:<port>/services/search/jobs/<SID>/results?output_mode=json

I am getting the error below:

{ "messages": [ { "type": "FATAL", "text": "Error in 'rest' command: This command must be the first command of a search." } ] }

This works fine for normal searches, but not for searches that start with | rest. Let me know why rest is not accepted even after adding the leading |.
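For reference, a sketch of the job-creation call with the leading `search` keyword dropped from the payload: SPL that begins with a generating command such as rest must be submitted as `search=| rest ...`, not `search=search | rest ...`, since the prepended `search` command pushes rest out of first position. Host and token are placeholders taken from the example above:

```
curl --location --request POST 'https://splunkcloud.com:<port>/services/search/jobs?output_mode=json' \
  --header 'Authorization: Bearer JWT' \
  --data-urlencode 'search=| rest /servicesNS/-/-/saved/searches/ splunk_server=local | rename eai:* as * | rename acl.* as * | search app=* | table triggered_alert_count, title'
```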
  Hi folks. I'm attempting to run Splunk in a docker container.  Or rather, I have that working - it was pretty easy with docker-compose based on https://splunk.github.io/docker-splunk/EXAMPLES.html#create-standalone-from-compose However, I want to create an index automatically, when the container first starts up.  This I'm finding difficult. I've tried a variety of methods, but they all failed in one way or another: yum and dnf are missing from the container, and microdnf appears to be broken.  This makes it difficult to customize the container's configuration. The container so configured appears to be based on RHEL, and we don't have any RHEL entitlements.  This too makes it difficult to customize the container's behavior. I tried setting up a volume and adding a script that would start splunk and shortly thereafter add the index, but I found that Splunk was missing lots of config files this way.  This may or may not be due to my relative inexperience with docker. I invoked the script with the following in docker-compose.yml: entrypoint: /bin/bash command: /spunk-files/start   I needed to copy these files, which I didn't have to copy before the entrypoint+command change: $SPLUNK_HOME/etc/splunk-launch.conf $SPLUNK_HOME/etc/splunk.version I also needed to create some logging directories, otherwise Splunk would fail to start. One of my favorite troubleshooting techniques, using a system call tracer like "strace", wasn't working because I couldn't install it - see above under microdnf. Does anyone know of a good way to auto-create a Splunk index at container creation time, without an RHEL entitlement? Thanks!  
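One approach that avoids any package installation inside the container (a sketch; the app name and index name are hypothetical) is to mount a minimal Splunk app containing an indexes.conf into etc/apps, since Splunk reads app configuration at startup:

```
# ./myindexes/local/indexes.conf
# mounted via docker-compose volumes:
#   - ./myindexes:/opt/splunk/etc/apps/myindexes
[my_index]
homePath   = $SPLUNK_DB/my_index/db
coldPath   = $SPLUNK_DB/my_index/colddb
thawedPath = $SPLUNK_DB/my_index/thaweddb
```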
I want to forward logs in CEF format from Splunk to a 3rd-party system over TCP. To achieve this, I'm using the Splunk App for CEF. I went through the steps (Select Data, Map Fields, Create Static Fields, Define Output Groups, Save Search), but at the Save Search step, when I click Next, I get the error shown below. I tried the generated query in search and it works fine. I tried reinstalling the app, but the error is still the same. I'd appreciate any help I can get. I'm also open to alternative methods of forwarding alerts in CEF format from Splunk to external systems over TCP.
Hi there, I have a table with server names and their load values:

Server | Load capacity
G1     | 10
G1     | 80
G2     | 6
G2     | 25
G1     | 50
G3     | 15
G2     | 5
G4     | 20
G5     | 30

and so on...

Is there a way to get the sum of the top 3 load values per server? I can do that if I limit it to just one server:

my search | search "Server"="G1" | sort - Load | head 3 | stats sum(Load)

But I want to know this for all servers, to see which one is getting the highest loads on average.
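A sketch that does this for every server in one pass (field names taken from the table above):

```
my search
| sort 0 - "Load capacity"
| streamstats count as rank by Server
| where rank <= 3
| stats sum("Load capacity") as Top3Load by Server
| sort - Top3Load
```

After the global sort, streamstats numbers the rows within each server, so `where rank <= 3` keeps each server's three highest loads before summing.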
We have merged with another company that has a Splunk cluster in AWS. They would like to extend services to other environments in AWS. Instead of routing to the other environments by connecting the Splunk VPC to the other VPCs using transit gateways, I would like to put the indexers behind a network load balancer and use AWS PrivateLink. PrivateLink requires putting an NLB (network load balancer) in front of the cluster and configuring the indexers as targets. The receiver builds an endpoint service in the VPC that assigns a local address that can be hit without routing. The DNS name for the service must be made to resolve to the local address by creating a hosted zone in Route 53. So, for example, if the local VPC of the log sender is 10.1.1.0/24 and the name is splunk.cluster.com, PrivateLink will use an IP address in the 10.1.1.0/24 range and splunk.cluster.com will resolve to that IP address. I have read that you must be able to resolve multiple IP addresses for that name. I have asked my AWS representative to investigate whether this would work, and he told me that other users are designing access this way. There are 5 indexers spread across 3 availability zones. The domain controllers that want to send the logs will be using UFs to send them. The advantage of using PrivateLink is that we can provide access to Splunk across different VPCs and organizations without having to open up CIDR block access and filter access with security groups and NACLs.
Hello, I am trying to write a query to identify whether any Splunk notable rule triggers with a change in urgency (i.e. from medium to high). Could anyone please help me build the query?
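As a starting point, a sketch that surfaces notables whose urgency varies within the search window (this assumes the Enterprise Security notable index and its urgency and rule_name fields; adjust to your environment):

```
index=notable
| stats dc(urgency) as urgency_count values(urgency) as urgencies by rule_name
| where urgency_count > 1
```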
Via Sigma (a rule format for SIEMs) converters, it is possible to convert Sigma rules to Splunk queries. This is a well-established process and can be done through tools like https://github.com/SigmaHQ/pySigma or https://github.com/SigmaHQ/sigma. My question is: is there any way to do the reverse? Is there a way to convert Splunk queries into Sigma rules?
I am hoping you could help me out with this query, as I am quite stuck. I want to retrieve the name of the server that acts as a provider and the name of the server that acts as a consumer. The way to check this is that a log has an IDConsumer that equals the ID of another server. For instance, here are two logs:

ServerName="Server1", ID="1", IDConsumer=null
ServerName="Server2", ID="2", IDConsumer="1"

And what I want to retrieve is a table like this:

To      | From    | IDConsumer | IDProvider
Server1 | Server2 | 1          | 2

Appreciate it a lot!
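A join sketch based on the sample logs above (the index name is hypothetical; it assumes each IDConsumer matches exactly one provider ID):

```
index=yourindex IDConsumer=*
| rename ServerName as From, ID as IDProvider
| join type=inner IDConsumer
    [ search index=yourindex
      | rename ID as IDConsumer, ServerName as To
      | fields To, IDConsumer ]
| table To, From, IDConsumer, IDProvider
```

The outer search keeps only events that reference a consumer; the subsearch relabels every server's own ID as IDConsumer so the join can look up the matching server name.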