All Topics



Hi, I want to create a real-time alert for about 3,000 messages per second. I want to create an action for each message that sends an HTTP alert to another system. My problem is that when I tried to do that, I received only about 50-100 messages per second and got a big delay. What is the best way to handle this throughput? Can we use batching in a real-time alert?
L.s., I want to get the latency from the input from a forwarder to an index, so we use the app Meta_woot. It creates an inputlookup file, meta_woot, containing the latest in-time, host names, and index names. So far so good. Next I use this file to calculate whether a host is recent, delayed, or late. Those searches are in the app and work fine. But I want a small extension: a table with the indexes leading, then calculating (by index) the percentage of recent/late hosts, summed to one outcome per index. So far the theory; now my attempts. I used the search below.

| inputlookup meta_woot where index=*
| eval convert_late=(1440*60)
| eval convert_delayed=(60*60)
| eval last_time=(now()-recentTime)
| eval last_time_indexed=case(last_time < convert_delayed, "Recent", last_time > convert_late, "Late", last_time > convert_delayed, "Delayed")
| eval compliant_host=if(last_time_indexed="Recent", "1","0")
| stats count(compliant_host) as chost by index, compliant_host

This gives me a result where the outcome is split into index vs. compliant_host and chost:

index        compliant_host  chost
main         0               11
main         1               123
msad         1               6
nmon         1               5
openshift    1               1
temp_log     1               1
wineventlog  1               2

Now the question: how do I calculate the percentage for index main ( (123+11)/11 ) so I get a percentage value? How do I calculate with values after a stats command? Please help. Thanks in advance, greetz Jari
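One hedged sketch for calculating after the stats: eventstats can bring a per-index total back onto each row, after which eval can compute the percentage. Field names are taken from the search above; this is untested against the actual lookup:

```
| inputlookup meta_woot where index=*
| eval last_time=(now()-recentTime)
| eval compliant_host=if(last_time < (60*60), "1", "0")
| stats count as chost by index, compliant_host
| eventstats sum(chost) as total by index
| eval pct=round(chost/total*100, 2)
| where compliant_host="1"
| table index, chost, total, pct
```

The key step is that eventstats, unlike stats, keeps the existing rows while adding the aggregate, so per-row math against the group total stays possible.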
Hi, I updated my Splunk app on Splunkbase yesterday and received the correct Splunk AppInspect certification. However, when installing the app on a Splunk Cloud instance (from "Browse More Apps"), version 1.0.0 is installed instead of 1.0.2. The app also shows up as "Last Updated: an hour ago", but still installs the wrong version. I made sure to update the build in the [install] stanza of app.conf, as well as the versions in app.conf and app.manifest. Any ideas?
Hi, I just finished creating my Splunk app; however, for some reason the app isn't available for Splunk Enterprise (only Splunk Cloud) - here is an image of the "compatibility" section in Splunkbase. When I try to find the app in the Enterprise app "store" it isn't there. One thing I thought might have happened was that I didn't add the following section to the manifest:

"platformRequirements": {
  "splunk": { "Enterprise": "*" }
},

However, even after adding it, the app still isn't available for Splunk Enterprise. Has this happened to anyone?
Hello! Can anyone please lend a hand with this issue? I'm still fairly new to this and am working my way through Fundamentals 2.

Scenario: Sophos Central antivirus clients are installed on Linux and Windows. A heavy forwarder pulls Sophos Central logs via an API into a dedicated antivirus index. These logs lack the product_version needed to populate the "Malware Operations - Clients By Product Version" dashboard panel. I've found the data I need in two places. A log entry in /var/log/messages, which is being ingested into the unix index and looks like this:

Jul 13 03:59:37 server-name savd[5860]: update.updated: Updated to versions - SAV: 10.5.2, Engine: 3.79.0, Data: 5.85

And a file /opt/sophos-av/engine/suiteVersion, which is readable by the splunk user and contains:

10.5.2.3790.203

I used the field extractor to create a regex that matches the log entry and extracts the product_version. I've created a custom app with these files in it, pushed from the deployment server onto one host, and pushed from the deployer to our Enterprise Security search head cluster:

props.conf
[syslog]
EXTRACT-date,time,host,process_name,product_version = ^(?P<date>\w+\s+\d+)\s+(?P<time>[^ ]+)\s+(?P<host>[^ ]+)\s+(?P<process_name>\w+)(?:[^ \n]* ){6}(?P<product_version>\SAV:\s\d*\.\d*\.\d*,\sEngine:\s\d*\.\d*\.\d*,\sData:\s\d*.\d*.+)

eventtypes.conf
[product_version]
search=product_version=*

tags.conf
[eventtype=product_version]
malware = enabled
operations = enabled

When I search the unix index I can see the product_version field and the tags in the search results.

Questions:
1.) How do I copy these events into the antivirus index, and is this necessary? (I'm thinking of efficiency, as the cim_Malware_indexes macro contains the antivirus and firewall indexes, which are much smaller than the unix index.)
2.) How do I get the product_version to show in the "Clients By Product Version" panel, which uses this search:

| `malware_operations_tracker(time_product_version)`
| search
| stats dc(dest) by product_version
| sort 10 - dc(dest)

3.) Is there a better way to do this? Any help is appreciated.
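On question 1, if the events really do need to exist in the antivirus index, one hedged option is the collect command, which writes search results into another index as new events (note this duplicates storage, and by default the copies get a stash sourcetype, so simply adding the unix index to the relevant macro may be the lighter approach). A sketch, assuming the field and index names above:

```
index=unix sourcetype=syslog product_version=*
| collect index=antivirus
```

This would typically run as a scheduled search so new matching events keep flowing into the target index.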
Hi, I have a log like below:

_time                   source  cpu_load_percent  process  pctCPU  cpuTIME  PID
7/14/21 1:59:41.000 PM  top     5.6               java     5.6     1:49.46  125353

Here is my SPL:

index="main" pctCPU="*" process="java" pctCPU>0

I have 3 java processes, each with a unique PID. Now I want a timechart that shows the pctCPU of the maximum PID. Any idea? Thanks
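A hedged sketch, assuming pctCPU is numeric and that the goal is to see each PID's peak pctCPU over time (the span is an example; adjust to taste):

```
index="main" process="java" pctCPU>0
| timechart span=5m max(pctCPU) as pctCPU by PID
```

This plots each PID as its own series, with max(pctCPU) keeping the peak value per time bucket, which makes it easy to spot which of the three PIDs dominates at any point.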
I'm looking to enable Workload Management for Splunk, and I'm trying to understand whether this is fully supported on a deployment that is using cgroups v2. Based on the documentation, the wording on the "Configure Linux systemd for workload management" page suggests that Splunk understands cgroups v1:

CPU: /sys/fs/cgroup/cpu/system.slice/<SPLUNK_SERVER_NAME>.service
Memory: /sys/fs/cgroup/memory/system.slice/<SPLUNK_SERVER_NAME>.service

But there is nothing (except the diagram) that suggests it can operate under cgroups v2 as well. Does anyone know whether Splunk fully supports cgroups v2, or will our deployment need to stay on cgroups v1?

Edit: Some investigation suggests that Splunk fails to start with Workload Management enabled on a cgroups v2 host:

Jul 15 10:30:50 hostnamehere splunk[16311]: Couldn't open dir /sys/fs/cgroup/cpu/: No such file or directory
Jul 15 10:30:50 hostnamehere splunk[16311]: Error perform systemd preparation: No such file or directory
Hi community, I can get 2126 events in the past 7 days with the following statement:

index=* "*Error Sending SMS : org.springframework.web.client.HttpServerErrorException: 500 Internal Server Error*"

One of the traceids in the events is traceid: 3312e53cb50bfe4c. With a traceid I get from the above events, I enter the following statement in the search box and search:

index=* 3312e53cb50bfe4c

What I want is 010d8aff-16a8-4f69-82ea-59484741432e under cf_app_name: user-profile-metadata-prod, which is the GUID I mentioned in the title. The question is: there are 2126 traceids, so how do I get these "GET /api/users/v2/<GUIDs>" out of the traceids? Best regards, Madoc
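One hedged sketch: feed the traceids from the error search into the GUID search as a subsearch. This assumes traceid is an extracted field in both event sets, that the traceid count stays under the subsearch result limit, and the rex pattern for the GUID is an assumption:

```
index=* "GET /api/users/v2/*"
    [ search index=* "*Error Sending SMS : org.springframework.web.client.HttpServerErrorException: 500 Internal Server Error*"
      | dedup traceid
      | fields traceid ]
| rex "GET /api/users/v2/(?<guid>[0-9a-f-]{36})"
| stats values(guid) as guid by traceid
```

The subsearch expands into an OR of traceid values, so the outer search only keeps the GET events belonging to the failing traces.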
Hi, I need to download the 32-bit Linux Splunk Universal Forwarder matching Splunk Enterprise version 8.2.1; can you help me? Thanks in advance, Monica Corso
Hi. I have Splunk on a Windows network, collecting data from clients using UFs. I need to make a report of newly installed applications on clients. I am searching for event IDs 11707 and 1033, but it seems these events are only logged when Windows Installer is used. For example, we installed Notepad++ on a client, and we do not have any event for that. Can someone please advise? Thank you.
Hi, I have Splunk on a Windows network and use UFs for Windows events. I am searching to detect user logons during specific hours:

index=main source="WinEventLog:Security" EventCode=528 OR EventCode=540 OR EventCode=4624
| where Logon_Type!=3 OR (Logon_Type=3 AND NOT LIKE(host,"DC%"))
| eval Signed_Account=mvindex(Account_Name,1)
| eval hour=strftime(_time,"%H")
| eval ShowTime=strftime(_time,"%D %H:%M")
| search Signed_Account=TThemistokleous (hour>23 OR hour<6)
| table host ShowTime Logon_Type

The issue is that, in the results, I have two users signed on to the same host at the same time, and each signed on 4 times! Can someone please advise what the issue could be? Thank you
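If the repeated rows turn out to be genuinely identical events (for example the same logon collected more than once, or a single logon generating several 4624 records), one hedged tweak is to dedup on the displayed fields before the table:

```
| search Signed_Account=TThemistokleous (hour>23 OR hour<6)
| dedup host ShowTime Signed_Account
| table host ShowTime Logon_Type
```

This only hides duplicates in the output; inspecting the raw events (e.g. the Logon_Type and Logon_ID of each) would be the way to confirm whether they are really distinct logons.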
Hi, we have 10 data centers around the world; each DC has the firewall setup, servers, and a Splunk indexer. Headquarters has the search heads, which are connected via search peers. Now we want to deploy the Fortigate apps on our search heads, access all the firewall logs, and view the dashboards. Please suggest how we should name the index in each data center: can we keep the index name "fortigate" for all the DCs, or should we use unique names? Please suggest.
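If a single shared name is chosen (which keeps the Fortigate app's searches simple, since one index name works across all search peers, while the host and splunk_server fields still distinguish the DCs), a hedged indexes.conf fragment for each DC's indexer could look like this; the paths shown are the Splunk defaults:

```
[fortigate]
homePath   = $SPLUNK_DB/fortigate/db
coldPath   = $SPLUNK_DB/fortigate/colddb
thawedPath = $SPLUNK_DB/fortigate/thaweddb
```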
Hi team, I have installed the PingFederate application in our organization for a few of our client servers. We now want to ingest the logs generated by this app into Splunk and use the dashboards on the Splunk search head to view the statistics, so I have installed the PingFederate App for Splunk (https://splunkbase.splunk.com/app/976/) on our Splunk search heads.

The PingFederate application runs on our client servers, so I logged into one of the servers where PingFederate is installed; the Splunk Universal Forwarder (UF) is already installed there and reporting to Splunk. Navigating to the PingFederate installation directory, I can see we are using PingFederate version 10.2.1.

I followed the PingFederate documentation (https://docs.pingidentity.com/bundle/pingfederate-93/page/qst1564002981075.html) and tried to set this up on the client server. But the documentation shows 5 Logger elements, and I am not sure which one I need to uncomment, or which RollingFile I need to uncomment, in the log4j2.xml file. Kindly help with this. Also, after uncommenting the required stanza, do I need to restart the PingFederate service for it to take effect?

And if the log file is generated in the log directory, what index and sourcetype should I use so that the dashboards in the app work as expected? Or if I am missing anything, kindly correct me on that as well.
Hi team, I have installed the PingAccess application in our organization for a few of our client servers. We now want to ingest the logs generated by this app into Splunk and use the dashboards on the Splunk search head to view the statistics, so I have installed the PingAccess App for Splunk (https://splunkbase.splunk.com/app/5368/) on our Splunk search heads.

The PingAccess application runs on our client servers, so I logged into one of the servers where PingAccess is installed; the Splunk Universal Forwarder (UF) is already installed there and reporting to Splunk. Navigating to the PingAccess installation directory, I can see we are using PingAccess version 6.2.0.

As per the documentation (https://docs.pingidentity.com/bundle/pingaccess-63/page/gyx1564006725145.html), I edited the log4j2.xml file and uncommented the lines below:

<AppenderRef ref="ApiAudit2Splunk"/>
<AppenderRef ref="EngineAudit2Splunk"/>
<AppenderRef ref="AgentAudit2Splunk"/>

But I couldn't find the following lines in the xml file, even though the document mentions them:

<AppenderRef ref="SidebandClientAudit2Splunk"/>
<AppenderRef ref="SidebandAudit2Splunk"/>

What should I do if these lines are missing from the xml file - skip them, or add them? Kindly help with this. And after performing these steps, do I need to restart the PingAccess service so that the respective log files are generated in the log directory? Kindly confirm this part as well.

If the log files are generated in their respective directory, I believe we need to ingest the log files below into Splunk. We already have the Splunk Universal Forwarder running on the server, so can we just use any index and sourcetype, or is there a specific index name and sourcetype (for the PingAccess App) that needs to be created in Splunk for ingesting the logs? Kindly confirm this as well (since we want the dashboards installed on the Splunk search head to show the statistics).

• pingaccess_engine_audit_splunk.log
• pingaccess_api_audit_splunk.log
• pingaccess_agent_audit_splunk.log

So kindly help me with my query.
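A hedged inputs.conf sketch for the UF on the PingAccess server, assuming a /opt/pingaccess/log path and placeholder index/sourcetype names (the PingAccess App for Splunk's documentation should be checked for the exact sourcetypes its dashboards expect, since the panels will only populate if those match):

```
[monitor:///opt/pingaccess/log/pingaccess_engine_audit_splunk.log]
index = pingaccess
sourcetype = pingaccess_engine_audit

[monitor:///opt/pingaccess/log/pingaccess_api_audit_splunk.log]
index = pingaccess
sourcetype = pingaccess_api_audit

[monitor:///opt/pingaccess/log/pingaccess_agent_audit_splunk.log]
index = pingaccess
sourcetype = pingaccess_agent_audit
```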
Hi all, I have a dashboard with a chart representing Count by Date. I have increased the font size of the count value, but I need help increasing the font size of the date. Below is the XML I'm currently using:

<row>
  <panel>
    <html>
      <style>
        #test th{
          font-size: 15px !important;
          font-weight: bold !important;
        }
      </style>
    </html>
  </panel>
</row>

Thanks
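A hedged sketch for the date labels, assuming they are rendered as chart axis labels in SVG; the selector below is an assumption and would need verifying with the browser's inspector against the actual chart markup:

```
<row>
  <panel>
    <html>
      <style>
        /* assumed selector for the x-axis (date) labels of the chart with id "test" */
        #test .highcharts-axis-labels text{
          font-size: 15px !important;
        }
      </style>
    </html>
  </panel>
</row>
```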
Hello all, sorry if this has already been answered; I'm a beginner and looking for some help. I built a dashboard which has 3 columns:

Employee ID | File Name | Download report link (makes a call to an external URL)

When the user clicks the download-report link, I have to make an external web service call which takes the filename as input and returns the content/file. I was able to make the call and everything was working. However, due to security concerns, the API now requires an authorization token (static value) to be sent as an HTTP header. I'm not able to make much progress on how to set this HTTP header when making an external web service call from a Splunk Cloud dashboard. Thanks for your help.
Has anyone ever set up a script to monitor ESTABLISHED sessions on Windows using a netstat command? I was looking to copy/modify the win_listening_ports.bat script that is part of the Splunk for Windows app, but am not having much luck. I want to gather local address:local port and foreign address:foreign port; can anyone help? BTW, I am aware of the WinNetMon inbound/outbound monitors that are part of the same Windows app. I don't want to capture all connections, rather a snapshot at specified intervals, like once hourly. Thanks in advance!
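A minimal sketch of what such a script could look like, following the win_listening_ports.bat pattern of a batch scripted input; the script name and the inputs.conf values below are assumptions, and the interval handles the hourly snapshot:

```
@echo off
REM Snapshot of ESTABLISHED TCP sessions:
REM columns are proto, local address:port, foreign address:port, state, PID
netstat -ano | findstr /C:"ESTABLISHED"
```

With a scripted-input stanza along these lines (path relative to the app):

```
[script://.\bin\win_established_sessions.bat]
interval = 3600
sourcetype = windows_established_sessions
disabled = 0
```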
I would like TestResult to output "1" if Status contains "Pass" or "Completed", and "0" otherwise. How do I change the query below to check for both strings?

| eval TestResult=if(like(Status, "%Completed%"), 1, 0)
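Combining two like() conditions with OR inside the if() is one straightforward way to do this, assuming Status is a single-valued string field:

```
| eval TestResult=if(like(Status, "%Pass%") OR like(Status, "%Completed%"), 1, 0)
```

match(Status, "Pass|Completed") would be a regex-based alternative that covers both substrings in one condition.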
How to send on-prem Windows Defender AV data to Splunk?
Hi, I recently need to reinstall the operating system on the computer where Splunk Enterprise is installed. I want to back up the old data and then import it into the freshly reinstalled system. What should I do? Is there a tutorial?
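As a hedged sketch of one common approach, a cold backup: stop Splunk first so the index buckets are consistent, then archive the configuration and index data. The backup path is an example, and $SPLUNK_HOME is assumed to point at the installation directory:

```
"$SPLUNK_HOME/bin/splunk" stop
# etc = configuration and apps; var/lib/splunk = index data (default location)
tar -czf /backup/splunk_backup.tar.gz -C "$SPLUNK_HOME" etc var/lib/splunk
```

Restoring on the new system would then mean installing the same Splunk version, extracting the archive back into $SPLUNK_HOME while Splunk is stopped, and starting it.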