All Topics



Hi, I have Splunk on a Windows network and use the UF for Windows events. I am searching to detect user logons during specific hours:

index=main source="WinEventLog:Security" (EventCode=528 OR EventCode=540 OR EventCode=4624)
| where Logon_Type!=3 OR (Logon_Type=3 AND NOT like(host,"DC%"))
| eval Signed_Account=mvindex(Account_Name,1)
| eval hour=strftime(_time,"%H")
| eval ShowTime=strftime(_time,"%D %H:%M")
| search Signed_Account=TThemistokleous (hour>23 OR hour<6)
| table host ShowTime Logon_Type

The issue: in the results I get, for the same host at the same time, 2 users signed on, and each signed on 4 times! Can someone please advise what the issue might be? Thank you
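For reference, here is a variant I have been experimenting with. It converts hour to a number (in case the string comparison misbehaves) and dedups on host, account, and minute, on the assumption that each interactive logon writes several near-identical audit records; the dedup fields are my guess at what makes a logon unique:

index=main source="WinEventLog:Security" (EventCode=528 OR EventCode=540 OR EventCode=4624)
| where Logon_Type!=3 OR (Logon_Type=3 AND NOT like(host,"DC%"))
| eval Signed_Account=mvindex(Account_Name,1)
| eval hour=tonumber(strftime(_time,"%H"))
| eval ShowTime=strftime(_time,"%D %H:%M")
| search Signed_Account=TThemistokleous (hour>23 OR hour<6)
| dedup host Signed_Account ShowTime
| table host ShowTime Logon_Type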
Hi, we have 10 data centers around the world; each DC has a firewall setup, servers, and a Splunk indexer. Headquarters has the search heads, which are connected via search peers. Now we want to deploy the Fortigate apps on our search heads, access all the firewall logs, and view the dashboards. Please suggest how we should name the index in each data center. Can we keep the index name "fortigate" for all the DCs, or should we use a unique name per DC? Please suggest.
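For illustration, this is the single shared stanza I have in mind for every DC indexer; the index name here is just an example, not something the app mandates:

# indexes.conf on each DC indexer (example name)
[fortigate]
homePath   = $SPLUNK_DB/fortigate/db
coldPath   = $SPLUNK_DB/fortigate/colddb
thawedPath = $SPLUNK_DB/fortigate/thaweddb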
Hi Team, I have installed and am using the PingFederate application in our organization on a few of our client servers. We now want to ingest the logs generated by this app into Splunk and use the dashboards to view the statistics on the Splunk search head, so I have installed the PingFederate App for Splunk (https://splunkbase.splunk.com/app/976/) on our Splunk search heads.

The PingFederate application runs on our client servers, so I logged into one of the servers where it is installed and confirmed that the Splunk Universal Forwarder (UF) is already installed there and reporting to Splunk. I then navigated to the directory where PingFederate is installed; the version we are using is 10.2.1.

I followed the PingFederate documentation (https://docs.pingidentity.com/bundle/pingfederate-93/page/qst1564002981075.html) and tried to set it up on the client server. However, the documentation shows 5 Logger elements, and I am not sure which Logger and which RollingFile I need to uncomment in the log4j2.xml file. Kindly help with this. Also, after uncommenting the required stanza, do I need to restart the PingFederate service for the change to take effect?

And once the log file is generated in the log directory, what index and sourcetype should I use so that the dashboards present in the app work as expected? If I am missing anything, kindly correct me on that as well.
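For reference, this is the monitor stanza I am planning on the client server's UF; the log path, index, and sourcetype are assumptions on my part and not confirmed against the app's documentation:

# inputs.conf on the client server's UF (path, index, and sourcetype are placeholders)
[monitor:///opt/pingfederate/log/splunk-audit.log]
index = pingfederate
sourcetype = pingfederate:audit
disabled = false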
Hi Team, I have installed and am using the PingAccess application in our organization on a few of our client servers. We now want to ingest the logs generated by this app into Splunk and use the dashboards to view the statistics on the Splunk search head, so I have installed the PingAccess App for Splunk (https://splunkbase.splunk.com/app/5368/) on our Splunk search heads.

The PingAccess application runs on our client servers, so I logged into one of the servers where it is installed and confirmed that the Splunk Universal Forwarder (UF) is already installed there and reporting to Splunk. I then navigated to the directory where PingAccess is installed; the version we are using is 6.2.0.

As per the documentation (https://docs.pingidentity.com/bundle/pingaccess-63/page/gyx1564006725145.html), I edited the log4j2.xml file and uncommented the lines below:

<AppenderRef ref="ApiAudit2Splunk"/>
<AppenderRef ref="EngineAudit2Splunk"/>
<AppenderRef ref="AgentAudit2Splunk"/>

However, I could not find the following lines in the xml file, even though the document mentions them:

<AppenderRef ref="SidebandClientAudit2Splunk"/>
<AppenderRef ref="SidebandAudit2Splunk"/>

What should I do if these lines are missing from the xml file: skip them, or add them? Kindly help with this. Also, after performing these steps, do I need to restart the PingAccess service so that the respective log files are generated in the log directory? Kindly confirm this part as well.

Once the log files are generated in their respective directory, I believe we need to ingest the files listed below into Splunk. Since the Universal Forwarder is already running on the server, can we use any index and sourcetype, or is there a specific index name and sourcetype (for the PingAccess App) that needs to be created in Splunk and used for ingestion? (We want the dashboards installed on the Splunk search head to show the statistics.)

• pingaccess_engine_audit_splunk.log
• pingaccess_api_audit_splunk.log
• pingaccess_agent_audit_splunk.log

So kindly help me with my query.
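For reference, these are the monitor stanzas I am considering on the UF once the files appear; the install path, index, and sourcetypes are my assumptions, not values taken from the app:

# inputs.conf on the client server's UF (path, index, and sourcetypes are placeholders)
[monitor:///opt/pingaccess/log/pingaccess_engine_audit_splunk.log]
index = pingaccess
sourcetype = pingaccess:engine:audit

[monitor:///opt/pingaccess/log/pingaccess_api_audit_splunk.log]
index = pingaccess
sourcetype = pingaccess:api:audit

[monitor:///opt/pingaccess/log/pingaccess_agent_audit_splunk.log]
index = pingaccess
sourcetype = pingaccess:agent:audit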
Hi All, I have a dashboard with a chart representing Count by Dates. I have increased the font size of the count value, but I need help increasing the font size of the Date. Below is the XML I'm currently using:

<row>
  <panel>
    <html>
      <style>
        #test th {
          font-size: 15px !important;
          font-weight: bold !important;
        }
      </style>
    </html>
  </panel>
</row>

Thanks
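For reference, this is the variant I have been trying for the axis (Date) labels; the selector is a guess based on the SVG classes visible in the browser inspector, so it may need adjusting:

<row>
  <panel>
    <html>
      <style>
        /* hypothetical selector for the chart's x-axis (Date) labels */
        #test .highcharts-axis-labels text {
          font-size: 15px !important;
        }
      </style>
    </html>
  </panel>
</row>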
Hello All, sorry if this has already been answered. I'm a beginner and looking for some help. I built a dashboard which has 3 columns:

Employee ID | File Name | Download report link (makes a call to an external URL)

When the user clicks the download report link button, I have to make an external webservice call which takes the filename as input and returns the content/file. I was able to make the call and everything was working. However, due to security concerns, the API now requires an authorization token (static value) to be sent as an HTTP header. I'm not able to make much progress on how to set this HTTP header when making an external webservice call from a Splunk Cloud dashboard. Thanks for your help.
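For context, this is the shape of the request that works from the command line; the header name, token, and URL are placeholders, and my question is how to reproduce this call from the dashboard:

# hypothetical endpoint and token
curl -H "Authorization: Bearer <static-token>" \
     "https://example.com/reports/download?file=report1.pdf" \
     -o report1.pdf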
Has anyone ever set up a script to monitor ESTABLISHED sessions for Windows using a netstat command? I was looking to copy/modify the win_listening_ports.bat script that is part of the Splunk for Windows app, but am not having much luck. I want to gather local address:local port and foreign address:foreign port. Can anyone help? BTW, I am aware of the WinNetMon inbound/outbound monitors that are part of the same Windows app. I don't want to capture all connections; rather, I want a snapshot at specified intervals, like once hourly. Thanks in advance!
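For reference, this is the direction I have been sketching: a minimal .bat modeled on win_listening_ports.bat, plus a scripted input to run it hourly. The script name, app path, and sourcetype are placeholders:

@echo off
REM win_established_sessions.bat - snapshot of ESTABLISHED TCP sessions
REM netstat -ano prints local address:port and foreign address:port columns
netstat -ano | findstr ESTABLISHED

# inputs.conf on the UF (script path and sourcetype are placeholders)
[script://.\bin\win_established_sessions.bat]
interval = 3600
sourcetype = windows:netstat:established
disabled = false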
I would like TestResult to output "1" if Status contains "Pass" or "Completed", and "0" otherwise. How do I change the query below to check for both strings?

| eval TestResult=if(like(Status, "%Completed%"), 1, 0)
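This is the variant I have been trying, which ORs the two like() checks; a match() with an alternation would presumably work too:

| eval TestResult=if(like(Status, "%Pass%") OR like(Status, "%Completed%"), 1, 0)

or, equivalently:

| eval TestResult=if(match(Status, "Pass|Completed"), 1, 0)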
How do I send on-prem Windows Defender AV data to Splunk?
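For context, this is the event-log input I have been considering on the endpoint's UF; the channel is the standard Defender operational log, but the index name is a placeholder:

# inputs.conf on the Windows host's UF (index name is a placeholder)
[WinEventLog://Microsoft-Windows-Windows Defender/Operational]
index = windows_defender
disabled = 0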
Hi, I need to reinstall the operating system on the computer where I recently installed Splunk Enterprise. I want to back up the old data and then import it into the freshly reinstalled system. What should I do? Is there a tutorial?
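A minimal sketch of what I have in mind, assuming a default /opt/splunk install and that Splunk is stopped first; please correct me if the index data lives elsewhere:

# stop Splunk so the index buckets are quiescent
/opt/splunk/bin/splunk stop
# archive the index data, plus etc/ for the configuration
tar -czf /backup/splunk-var-lib.tar.gz /opt/splunk/var/lib/splunk
tar -czf /backup/splunk-etc.tar.gz /opt/splunk/etc
# after reinstalling the OS and Splunk, restore both archives to the same paths and start Splunk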
I am looking to collect Netflow data on a host where I have installed the Splunk UF along with the Stream add-on. I want to send this data to a client's Splunk indexer on port 9997. The docs say to configure the indexer to receive Stream data on port 9997; however, the "Set up data collection on remote machines" section requires HEC tokens for the indexer. Is there a way to configure the add-on to send Netflow data to the indexers on port 9997 instead of via the HEC token?
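For reference, this is the standard S2S forwarding already configured on the UF; as I understand it, this covers UF-collected data, but I am unsure whether streamfwd's output can be pointed at it instead of HEC (the indexer address is a placeholder):

# outputs.conf on the UF
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = indexer.example.com:9997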
Hi All, the following search was created to identify insecure communications. I also need to see whether the end-to-end connectivity is successful over the insecure protocol. For example, some services are configured in F5 with an HTTP redirection profile: you will observe port 80 traffic on the edge firewall, but F5 then redirects it to HTTPS. Could you please help us achieve this?

(index=paloalto OR index=juniper) (dest_port=20 OR dest_port=22 OR dest_port=23 OR dest_port=53 OR dest_port=139 OR dest_port=80 OR dest_port=445 OR dest_port=3389 OR dest_port=21)
| lookup Port_service.csv dest_port as dest_port OUTPUT service
| stats count values(src_ip) by dest_port service dest_ip transport action
| table values(src_ip) dest_port service transport dest_ip count action
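As a rough sketch of the follow-on correlation I have in mind, this groups flows by src/dest pair and flags pairs seen on both 80 and 443, on the assumption that the F5 redirect shows up as a second flow between the same endpoints:

(index=paloalto OR index=juniper) (dest_port=80 OR dest_port=443)
| stats values(dest_port) as ports values(action) as actions by src_ip dest_ip
| where mvcount(ports)=2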
Hi All, I am trying to create a dashboard with the response time between transactions. For example, let's say I have data output as below:

_time                       Direction   ABCD              Transaction_ID
2021-07-13 18:56:58.487     in          abcd.008.001.08   123456789
2021-07-13 18:56:58.603     out         abcd.008.001.08   123456789
2021-07-13 18:56:59.981     in          abcd.002.001.10   123456789
2021-07-13 18:57:00.062     out         abcd.002.001.10   123456789
2021-07-13 18:57:00.565     out         abcd.002.001.10   123456789

From the above output, I would like to calculate the time difference between the first and fourth events (i.e. 2021-07-13 18:57:00.062 - 2021-07-13 18:56:58.487) and the time difference between the second and third events (i.e. 2021-07-13 18:56:59.981 - 2021-07-13 18:56:58.603).

Can someone help me with this query? I highly appreciate your help in this context.
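Here is a partial sketch of what I have tried so far: it numbers the events per Transaction_ID in time order and pairs them by position, which assumes the event order is always exactly as in the example above:

... base search ...
| sort 0 _time
| streamstats count as seq by Transaction_ID
| eval t1=if(seq=1,_time,null()), t2=if(seq=2,_time,null()), t3=if(seq=3,_time,null()), t4=if(seq=4,_time,null())
| stats min(t1) as t1 min(t2) as t2 min(t3) as t3 min(t4) as t4 by Transaction_ID
| eval diff_first_fourth=t4-t1, diff_second_third=t3-t2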
Hello guys, I have a quick question that has been challenging me. I use this SPL to extract some info:

| stats values(*) as * by CLIENTE_OUTPOST

Sometimes I use list, sometimes I use values... and I want to be able to extract all the values in the multivalue field "PROMOS" into a new field called "ADDED". This is an example.

From this:

CLIENT_OUTPOST   PROMOS                            DATE         VOUCHER
LIZZA_90         UIK_IO 87585 A_IDYD 78545 10584   18-05-2021   XX-PO-89

I want this:

CLIENT_OUTPOST   PROMOS                            DATE         VOUCHER    ADDED
LIZZA_90         UIK_IO 87585 A_IDYD 78545 10584   18-05-2021   XX-PO-89   87585 78545 10584

I would be so thankful if you could help me out. Just for reference, I will have either strings with characters or strings that are numbers... but I have tried mvfilter and rex without any luck. Thank you so much, guys!

Love,

Cindy
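From the example, it looks like ADDED should keep only the all-digit values, so the variant I would expect to work, assuming PROMOS really is multivalue at that point, is:

| stats values(*) as * by CLIENTE_OUTPOST
| eval ADDED=mvfilter(match(PROMOS, "^\d+$"))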
action   feature   version   location   count   ?difference?
A        f1        v1        WA         120     0
A        f1        v1        OR         110     10
A        f1        v1        CA         115     5
B        f1        v1        AZ         120     0
A        f1        v2        WA         14      1
A        f1        v2        OR         10      5
B        f1        v2        AZ         15      0

I have the table of info above: action, feature, version, location, and count. Could anyone help me find the last column, "difference"?

A group is identified by the same feature and version combination, so in the example table the first four rows (f1+v1) are one group and the last three rows (f1+v2) are the second group. Within each group, difference = count of B - count of A. For example:

row 1: difference = count(B, f1, v1, AZ) - count(A, f1, v1, WA) = 120 - 120 = 0
row 2: difference = count(B, f1, v1, AZ) - count(A, f1, v1, OR) = 120 - 110 = 10

The difference for the B row itself is 0.
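Here is a sketch of how I imagine this could work: eventstats copies each group's B count onto every row, assuming there is exactly one B row per feature+version group:

... base search ...
| eventstats max(eval(if(action="B", count, null()))) as countB by feature version
| eval difference=if(action="B", 0, countB - count)
| fields - countB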
Hello, I am trying to rename some fields pre-index using props.conf and it's not working. Props below:

[onelogin:event]
EVAL-app_name = app
EVAL-src_ip = ipaddr

I also tried using FIELDALIAS, to no avail. The props file is in the local dir on the HF (/opt/splunk/etc/apps/splunk_ta_onelogin/local). btool debug shows the intended config:

/opt/splunk/etc/apps/splunk_ta_onelogin/local/props.conf [onelogin:event]
/opt/splunk/etc/apps/splunk_ta_onelogin/local/props.conf EVAL-app_name = app
/opt/splunk/etc/apps/splunk_ta_onelogin/local/props.conf EVAL-src_ip = ipaddr

I also tried putting the props file in system/local, with no effect. How do I troubleshoot this? Thanks!
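For completeness, this is the FIELDALIAS variant I tried as well (rewritten here from memory, so the exact class names may differ from what I had):

[onelogin:event]
FIELDALIAS-app_name = app AS app_name
FIELDALIAS-src_ip = ipaddr AS src_ip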
Hi, I've upgraded from Splunk 6.6 to 8.2 (single instance), and all my real-time alerts (per result) keep triggering for the same event every 5 minutes (the throttle period, with usermail as the suppressed field). The only way to stop it is to restart Splunk or deactivate the alert. I deactivated all alerts and saved searches and left only one alert producing a single event with the same result; the alert is triggered every five minutes for the same event. It is a simple query on a server log filtering only errors. I've activated the SavedSplunker debug log, and the only strange thing is this message every minute after the event was produced:

DEBUG SavedSplunker - failed to write suppressed results to /opt/splunk/var/run/splunk/dispatch/rt_scheduler_Z2VybWFuLnNhbnRhbmE_aHlkcmEtYWRtaW4__RMD53954c1af0f5d4e15_at_1626209231_1.144/results.csv.gz

Thanks in advance
I am trying to update Splunk saved-search schedules by calling the REST API in a bash script. I read the cron expression and search title from a CSV file and run a loop. It is partially working: it changes the schedule only for private searches, not global ones.

#!/bin/bash
INPUT=data.csv
OLDIFS=$IFS
IFS=','
[ ! -f $INPUT ] && { echo "$INPUT file not found"; exit 99; }
echo "-----------------------------------------------------" >> output.txt
while read app cron search_name
do
  # URL-encode spaces in the search name
  SEARCH=${search_name// /%20}
  QUERY="https://localhost:8089/servicesNS/admin/$app/saved/searches/$SEARCH"
  echo $QUERY >> output.txt
  echo -e "\n---------------------------------------------------------\n"
  echo -e "---Search Name-->$search_name"
  echo -e "---Rest API URI-->$QUERY"
  # POST the new cron schedule to the saved search endpoint
  curl -i -k -u user:password $QUERY -d cron_schedule=$cron -d output_mode=json >> response.txt
done < $INPUT
IFS=$OLDIFS
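This is the variant I am about to try for the globally shared searches; my understanding is that app-level/global knowledge objects are owned by the pseudo-user "nobody", so only the owner segment of the URI changes:

# for shared/global saved searches, address the "nobody" owner namespace
QUERY="https://localhost:8089/servicesNS/nobody/$app/saved/searches/$SEARCH"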
I am creating a dashboard for my team. So far, I've been able to implement chain searches by modifying the source code. However, they are based on a live base search. My goal is to power the base searches off of a report instead of a live search. Is that possible?
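A sketch of what I am hoping works: Simple XML lets a <search> element reference a saved report by name via the ref attribute, so the base/post-process pattern would look something like this ("My Scheduled Report" and the post-process query are placeholders):

<!-- dashboard-level base search backed by a report -->
<search id="base" ref="My Scheduled Report"></search>
<panel>
  <table>
    <search base="base">
      <query>| stats count by host</query>
    </search>
  </table>
</panel>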
Hi there, I am working on the following search and somehow cannot append cef_ruleid to the "fit DensityFunction" table result from the search macro `search_macro_smart($cef_ruleid$)`:

splunk_server="splunk" index="area" source="area1" sourcetype="dsystem_events"
| stats count by cef_ruleid
| sort - count
| head 85
| map search="search `search_macro_smart($cef_ruleid$)`" maxsearches=85
| join [| makeresults | eval current_id=$cef_ruleid$ | stats values(current_id)]

The search macro `search_macro_smart($cef_ruleid$)` generates 85 rows of outlier data over the past 45 days, and I need to append cef_ruleid to the macro's output on the dashboard, so we know which cef_ruleid each detected outlier belongs to.

Your help is appreciated,
mason
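The workaround I have been considering: instead of the join, tag each map invocation's results with the token inside the map search itself, assuming map substitutes $cef_ruleid$ as a plain string there as well:

splunk_server="splunk" index="area" source="area1" sourcetype="dsystem_events"
| stats count by cef_ruleid
| sort - count
| head 85
| map search="search `search_macro_smart($cef_ruleid$)` | eval cef_ruleid=\"$cef_ruleid$\"" maxsearches=85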