All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


I'm working on automating the creation of muting rules for our maintenance windows. I've been looking around to see if there is a way to use the API to create a muting rule, but I'm not finding anything. Does this not exist? Is there an existing integration with ServiceNow that would do this that I'm just not finding? I'm hoping to tie into our change management system so that these muting windows are created automatically upon approval.
I cannot download Splunk Enterprise. Once I click on download, all I download is a zip file with a .tgz extension.
Hello everyone, I am planning to automate a process where we need to archive admin activity for our Splunk application. For that I need a query to fetch all the privileged actions conducted by admins inside Splunk. My first thought is to use the following query:

index=_audit sourcetype="audittrail" (action=edit* OR action=create* OR action=delete* OR action=restart*)

Unfortunately, this query is emitting a lot of data (around 900 MB per day), which the platform that I am using for automation can't work with.

=> Is there maybe a query I can use to get the data I need in a more targeted way, to the point where it reduces the size to 20 MB or so?

I would appreciate any help, and thank you in advance!
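For illustration, a minimal, untested sketch of the kind of narrowed query I have in mind: keep only the fields the archive needs and pre-aggregate, so only one summary row per user/action/object combination is shipped (object and info are guesses at useful fields in the _audit events; verify them against your own data):

index=_audit sourcetype="audittrail" (action=edit* OR action=create* OR action=delete* OR action=restart*)
| fields _time, user, action, object, info
| stats count, earliest(_time) as first_seen, latest(_time) as last_seen by user, action, object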
Good day,

I'm hoping someone smarter than me can help me figure this out. In the search below, I'm trying to correlate a set of events from two different indexes. IndexA has network switch connection logs, and IndexB has DHCP hostname mappings. I want to combine the information from both. IndexA has a unique SessionID value that I'm using to differentiate individual connection attempts, and I want my stats table to summarize by this field only, so I can see information per connection attempt. IndexB does not have this field, however.

For reference, in the narrow time range I'm working within, there are only two SessionIDs for the same MAC/IP address pair. Likewise, there's only a single set of unique DHCP logs. This was done deliberately by unplugging/plugging a computer into the network twice to generate two connection attempts while keeping the same DHCP information.

Because the SessionID field is not in the second index, I ran the first stats command to summarize my events on the two fields the two indexes did share: the MAC and IP address. With the individual events now gone and everything summarized, I ran the second stats command to then summarize by Session_ID. This works except for one flaw. As stated above, there are only two Session_IDs, contained within two events, each with its own _time field. Because I use the values() function of stats, both timestamps are printed as a multi-value field in each row of the stats table, and you're not able to tell which timestamp belongs to which Session_ID.

I've tried different permutations of things, such as using mvexpand between the stats commands to split the time_indexA field (I'm not interested in charting the time for events from indexB) back into individual events. I've also tried summarizing by time in the first stats command alongside the MAC/IP address. I attempted using eventstats as well, but it's not a command I'm very familiar with, so that didn't work either. And finally, when I do manage to make some progress with correlating each timestamp to its own event, so far I've always lost the hostname field from indexB as a byproduct. I've attached a picture of the table when run in case my explanation is subpar.

(index=indexA) OR (index=indexB)
| rex field=text "AuditSessionID (?<SessionID>\w+)"
| rex field=pair "session-id=(?<SessionID>\w+)"
| eval time_{index}=strftime(_time,"%F %T")
| eval ip_add=coalesce(IP_Address, assigned_ip), mac_add=coalesce(upper(src_mac), upper(mac))
| eval auth=case(CODE=45040, "True", true(), "False")
| stats values(host_name) as hostname values(networkSwitch) as Switch values(switchPort) as Port values(auth) as Auth values(SessionID) as Session_ID values(time_indexA) as time by mac_add, ip_add
| stats values(time) as time values(hostname) as hostname values(Switch) as Switch values(Port) as Port values(Auth) as Auth values(ip_add) as IP_Address values(mac_add) as MAC_Address by Session_ID
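For reference, here is an untested variant along the lines of the eventstats idea, using the same field extractions as above; corrections welcome. The thinking: eventstats annotates each event with the hostname shared by its MAC/IP pair instead of collapsing events, so a single stats by SessionID can then keep each session's own timestamp:

(index=indexA) OR (index=indexB)
| rex field=text "AuditSessionID (?<SessionID>\w+)"
| rex field=pair "session-id=(?<SessionID>\w+)"
| eval ip_add=coalesce(IP_Address, assigned_ip), mac_add=coalesce(upper(src_mac), upper(mac))
| eval auth=case(CODE=45040, "True", true(), "False")
| eventstats values(host_name) as hostname by mac_add, ip_add
| search index=indexA
| stats earliest(_time) as time values(hostname) as hostname values(networkSwitch) as Switch values(switchPort) as Port values(auth) as Auth values(ip_add) as IP_Address values(mac_add) as MAC_Address by SessionID
| eval time=strftime(time, "%F %T")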
TRAP Cloud has an API to export information, but there is no add-on to integrate TRAP Cloud with Splunk. Has anyone made this integration successfully? Is there any intention to implement a supported add-on for Splunk that integrates TRAP Cloud?
Hi guys! I've been struggling for a while to understand metrics. When making a line chart of both the average and the max value, the trends are exactly the same. This is the query:

| mstats avg("% Processor Time") as Avg, max("% Processor Time") as Max Where index="metric_index" AND collection=CPU AND host="host" span=1m
| fields _time, Avg, Max

But if I take the avg and max of the value over the same time range, I get two different values. Query used:

| mstats avg("% Processor Time") as Avg, max("% Processor Time") as Max Where index="metric_index" AND "collection"="CPU" AND "host"="host"

Earlier I had this data ingested as events, and then I got different trends for avg and max. The inputs.conf file looks like this (using the Splunk_TA_windows app):

## CPU
[perfmon://CPU]
counters = % Processor Time
disabled = 0
samplingInterval = 2000
stats = average; min; max
instances = _Total
interval = 60
mode = single
object = Processor
useEnglishOnly = true
formatString = %.2f
index = metric_index

Is someone able to explain why this happens? Thanks in advance
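One hedged guess at what's going on: with interval = 60 the input emits a single measurement per minute, so inside every span=1m bucket, avg and max each see just one data point and necessarily agree; over the whole time range they aggregate many points and diverge. If that's right, widening the span should separate the two lines again (untested sketch of the same query):

| mstats avg("% Processor Time") as Avg, max("% Processor Time") as Max Where index="metric_index" AND collection=CPU AND host="host" span=30m
| fields _time, Avg, Max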
Hello, I tried to configure the Splunk forwarder but I got this error message: "Unable to initialize modular input "upload_pcap" defined in the app "splunk_app_stream": Introspecting scheme=upload_pcap: script running failed (exited with code 1)". Please help.
Hi, I want to move a file from a client to the deployment server via the search head. I was thinking of something like:

| makeresults
| eval content="the content of the text file that needs to be sent over to the DS."
| search [ | rest splunk_server=ds /services/search/jobs search="| outputlookup test.csv" ]

but it seems that the rest command does not support anything except search (it does not work with pipes after search either), so it is not the same as using REST from the CLI or REST queries from outside Splunk. Since it would be a challenge to store credentials protected in an app, doing it with a script or the CLI would be my last option. Doing it through the web interface would be better for further development. Thanks
I've spent hours and hours trying to figure out how to do what I thought would be very easy. There are examples of how to use the Pie chart on the Examples tab of this page: https://splunkui.splunk.com/Packages/visualizations/?path=%2FPie

They all work great and have an option to show the code. None of them do anything with the point.click event. On the Events tab, it shows the data changing when you click a wedge, but it doesn't have an option to show the code. How is it doing what it's doing? How do you wire up the event to fire your own code?

I started with the first example on the Examples tab, then tried to add a handler. I've tried onClick, pointClick, onPointClick, onPressed, tried passing it in through options, and, as a last resort, tried every idea ChatGPT could come up with. It finally gave up and suggested I ask here.

If I wrap the Pie in a div and add an onClick handler to the div, I can capture the event and see which wedge was clicked. But that seems messy and not how it's intended to be used.

Anybody got any ideas on how to get this wired up? Just seeing what the code looks like that's running on the Events tab would do the trick.
Hello, I am using multiple tokens on a dashboard created in Dashboard Studio and have added default values for them in the JSON source. When I save and reload the dashboard, these default token values are removed, and the table viz (which uses these tokens) shows as blank. Can you please suggest how I can fix this? Thank you.

"defaults": {
    "dataSources": {
        "ds.search": {
            "options": {
                "queryParameters": {}
            }
        }
    },
    "tokens": {
        "default": {
            "tokStatus": {
                "value": "Status=\"*\""
            },
            "tokPlannedStartDt": {
                "value": "LIKE(isPlannedStartDtPassed,\"%\")"
            },
            "tokPlannedEndDt": {
                "value": "LIKE(isPlannedEndDtPassed,\"%\")"
            }
        }
    }
}
Hello, I've been tasked with creating a dashboard in Splunk that shows Check Point Firewall system information. Is there any way that Splunk can display Check Point Firewall system information, such as CPU, memory, and traffic rate, and also hardware sensors (if possible)? We use the Check Point app for Splunk, but it looks like it has no information related to system info. I don't even know if the Check Point Firewall creates logs with this kind of information. The screenshots below are from SmartConsole.

Splunk Enterprise 9.3.1
Check Point Firewall R81.20

Any ideas would be appreciated.
I noticed my iPad is getting data and I didn't open a Splunk account. How do I figure out where it is going?
Hi, I am trying to install the Splunk OTel Collector version 0.116.0 in an EKS cluster, but I'm getting this error in the operator pod:

webhook "minstrumentation.kb.io": failed to call webhook: Post "https://splunk-otel-collector-operator-webhook.namespace.svc:443/mutate-opentelemetry-io-v1alpha1-instrumentation?timeout=10s": no endpoints available for service

I am unable to understand what is going on here. I have been following the latest install guide docs for 0.116.0 onwards. I had no problem on 0.113.0. The doc does mention a similar error, but there was no race condition while enabling the operator. Pods are running successfully, but the operator pod logs show the above error.

Thanks
Divya
I am not sure where to even start on this one.

I have two log file types I need to extract data from to get final counts. I need to combine by objectClass so that, for a given day, the "IAL to Enforce" value in log Type 2 sets the count for the number of Type 1 events. I need to run this over a year. A sketch of the sort of search I'm imagining follows the samples below. Thank you in advance!!!!

-----Type 1

2025-01-01 00:00:00,125 trackingid="tid:13256464" message='{"UserAccessSubmission":{"uuid":"abc123","mail":"sean@southpark.net","trackingId":"tid:13256464","objectClass":"cartmanUser","csp":"Butters"}}'
2025-01-01 00:01:00,125 trackingid="tid:13256464" message='{"UserAccessSubmission":{"uuid":"abc123","mail":"sean@southpark.net","trackingId":"tid:13256464","objectClass":"cartmanUser","csp":"Butters"}}'
2025-01-02 00:01:00,125 trackingid="tid:13256464" message='{"UserAccessSubmission":{"uuid":"abc123","mail":"sean@southpark.net","trackingId":"tid:13256464","objectClass":"cartmanUser","csp":"Butters"}}'
2025-01-02 00:01:00,125 trackingid="tid:13256464" message='{"UserAccessSubmission":{"uuid":"abc123","mail":"sean@southpark.net","trackingId":"tid:13256464","objectClass":"StanUser","csp":"Butters"}}'
2025-01-02 00:01:00,125 trackingid="tid:13256464" message='{"UserAccessSubmission":{"uuid":"abc123","mail":"sean@southpark.net","trackingId":"tid:13256464","objectClass":"StanUser","csp":"Butters"}}'

------- Type 2

{
  "@message": {
    "attributeContract": {
      "extendedAttributes": [],
      "maskOgnlValues": false,
      "uniqueUserKeyAttribute": "uuid"
    },
    "attributeMapping": {
      "attributeContractFulfillment": {
        "uuid": {
          "source": { "type": "ADAPTER" },
          "value": "uuid"
        }
      },
      "attributeSources": [],
      "issuanceCriteria": { "conditionalCriteria": [] }
    },
    "configuration": {
      "fields": [
        { "name": "Application ObjectClass", "value": "cartmanUser" },
        { "name": "Application Entitlement Attribute", "value": "cartmanRole" },
        { "name": "IAL to Enforce", "value": 2 }
      ]
    },
    "id": "Cartman",
    "name": "Cartman"
  },
  "@timestamp": "2025-01-01T00:00:01.833685"
}

{
  "@message": {
    "attributeContract": {
      "extendedAttributes": [],
      "maskOgnlValues": false,
      "uniqueUserKeyAttribute": "uuid"
    },
    "attributeMapping": {
      "attributeContractFulfillment": {
        "uuid": {
          "source": { "type": "ADAPTER" },
          "value": "uuid"
        }
      },
      "attributeSources": [],
      "issuanceCriteria": { "conditionalCriteria": [] }
    },
    "configuration": {
      "fields": [
        { "name": "Application ObjectClass", "value": "cartmanUser" },
        { "name": "Application Entitlement Attribute", "value": "cartmanRole" },
        { "name": "IAL to Enforce", "value": 1 }
      ]
    },
    "id": "Cartman",
    "name": "Cartman"
  },
  "@timestamp": "2025-01-02T00:00:01.833685"
}

The goal would be to get something like this:

Table 1: IAL to Enforce is 2
cartmanUser  2

Table 2: IAL to Enforce is 1
cartmanUser  1
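Here's the sketch of the kind of search I'm imagining, in case it helps frame answers. It's untested: type1/type2 are hypothetical sourcetype names, and the rex patterns are guesses at how the raw text looks in Splunk, so they would need adjusting. The idea is to count Type 1 submissions per day and objectClass, then attach that day's "IAL to Enforce" value from Type 2 via a join:

sourcetype=type1
| rex "\"objectClass\":\"(?<objectClass>[^\"]+)\""
| bin _time span=1d
| stats count as submissions by _time, objectClass
| join type=inner _time objectClass
    [ search sourcetype=type2
      | rex "Application ObjectClass\W+value\W+(?<objectClass>\w+)"
      | rex "IAL to Enforce\W+value\W+(?<ial>\d+)"
      | bin _time span=1d
      | stats latest(ial) as ial by _time, objectClass ]
| stats sum(submissions) as events by ial, objectClass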
Hello, I have set up a home lab with Splunk. I have Splunk Enterprise on my admin Windows VM where I make all the changes, and a second Windows VM that is the "victim". Then I have an attack machine that is Kali Linux. All machines are on the same network, and I can ping each machine each way. The idea is to simulate real-world SOC experience.

I have the Splunk forwarder installed on the victim machine. I am forwarding all Windows logs (System, Security, Application, and Setup). I have run multiple Nmap scans against the victim machine. I forward the logs to an index called "wineventlog", and my machine is called "Victim".

I have used the Splunk guide on detecting port scanning and it yields no results. I have also used Security Essentials' "internal horizontal scan", and it gives an error. I've also checked that logs are being sent; I can see them in the index.

I have no idea why none of my searches are working. I don't know where to begin. Am I not getting the right data forwarded to Splunk? Am I not searching right? Or have I missed a step?

Please note I have absolutely no experience with Splunk. I'm from an IT background, so I'm not too useless, but I'm absolutely lost when it comes to Splunk.

Any help or suggestions are much, much appreciated and needed. I am totally lost. Apologies if I have not provided enough information; I can provide more if needed. Pictures included in post.
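A hedged sketch of the kind of detection those guides describe, for orientation only: it assumes Windows Filtering Platform auditing is enabled so that EventCode 5156 ("permitted a connection") events reach the wineventlog index. That audit policy is off by default, which could by itself explain the empty results, and the field names below are guesses based on the Windows TA, so check them against your extracted fields:

index=wineventlog EventCode=5156
| bin _time span=5m
| stats dc(dest_port) as distinct_ports by _time, src_ip, dest_ip
| where distinct_ports > 100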
Hi there,

Is it possible for the AppDynamics Node.js packages to be updated to fully declare their dependencies?

We use Yarn to manage Node.js packages. Yarn is stricter than npm about package dependencies, and trips up when installing appdynamics version 24.10.0 because several sub-packages don't declare all of their dependencies. We can work around this by using package extensions in our Yarn config file (.yarnrc.yml):

packageExtensions:
  "appdynamics-protobuf@*":
    dependencies:
      "https-proxy-agent": "*"
      "fs-extra": "*"
      "tar": "*"
  "appdynamics-libagent-napi@*":
    dependencies:
      "fs-extra": "*"
      "tar": "*"
  "appdynamics-native@*":
    dependencies:
      "https-proxy-agent": "*"
      "fs-extra": "*"
      "tar": "*"

This is a point of friction, though, because it must be done in every repository. Examining the AppDynamics packages, I see they use a preinstall script similar to the following:

npm install https-proxy-agent@5.0.0 fs-extra@2.0.0 tar@5.0.11 && node install.js appdynamics-libagent-napi-native appdynamics-libagent-napi 10391.0.0

This seems to be the source of the undeclared package dependencies. As a side effect of declaring the dependencies, I believe the first command (npm install) could be removed from the preinstall script. This would also remove the reliance on npm, which would play better with non-npm projects: many projects don't use npm (e.g. we're using Yarn), and relying on npm in the preinstall script can cause issues that are difficult to troubleshoot.
Hello, we have been unable to receive logs from forwarders since 29 January. I checked splunkd.log and found this error:

ERROR TcpOutputFd [110883 TcpOutEloop] - Connection to host=<ip>:port failed

What should I do?
I want to be able to support an adaptive response action in Splunk Enterprise Security, but when I put a value there I'm getting an error, even though it's not empty. What did I do wrong?
I have a string like this: {"code":"1234","bday":"15-02-06T07:02:01.731+00:00" "name":"Alex", "role":"student","age":"16"}, and I want to extract role from this string. Can anyone suggest a way to do this in Splunk?
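A minimal sketch of one way, runnable as-is via makeresults. It uses rex rather than spath because the sample as pasted is missing a comma after the bday value and so isn't strictly valid JSON; if your string lives in a field other than _raw, point rex at that field instead:

| makeresults
| eval raw="{\"code\":\"1234\",\"bday\":\"15-02-06T07:02:01.731+00:00\" \"name\":\"Alex\", \"role\":\"student\",\"age\":\"16\"}"
| rex field=raw "\"role\":\"(?<role>[^\"]+)\""
| table role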
We are unable to access our Splunk web server after an OS upgrade to RHEL 8.10. The Splunk service is up and running fine, but the UI is not available to us. Can someone please help us fix this issue? We have checked that port 8000 is listening fine:

Trying 10.xxx.xxx.xx...
Connected to 10.xxx.xxx.xx.
Escape character is '^]'.
^Z
Connection closed by foreign host.

tcp 0 0 0.0.0.0:8000 0.0.0.0:* LISTEN

We don't have anything in the internal logs to dig into. We are getting the following error message on the page:

The connection has timed out. The server at 10.xxx.xxx.xx is taking too long to respond. The site could be temporarily unavailable or too busy. Try again in a few moments. If you are unable to load any pages, check your computer's network connection. If your computer or network is protected by a firewall or proxy, make sure that Firefox is permitted to access the web.