All Posts


Hello Giuseppe, blanking out some of the details with XXXX for anonymity. I remember we also tried REGEX = . for the TCP input once.

Sample event from TCP:

<189>logver=700140601 timestamp=1714717074 devname="XXXXX" devid="XXXXX" vd="root" date=2024-05-03 time=06:17:54 eventtime=1714688275070439553 tz="XXX" logid="0001000014" type="traffic" subtype="local" level="notice" srcip=XX.XX.XX.XX srcname="XXXXXX" srcport=31745 srcintf="port1" srcintfrole="undefined" dstip=XXX.XXX.XXX.XXX dstname="XXX" dstport=443 dstintf="root" dstintfrole="undefined" srccountry="XXXX" dstcountry="XXX" sessionid=68756048 proto=6 action="deny" policyid=1 policytype="local-in-policy" poluuid="7575f13c-5066-51ed-1e15-40b0e5867f81" service="HTTPS" trandisp="noop" app="HTTPS" duration=0 sentbyte=0 rcvdbyte=0 sentpkt=0 rcvdpkt=0 appcat="unscanned" crscore=5 craction=262144 crlevel="low"

Sample event from UDP:

May 3 14:21:57 10.XX.XX.XX logver=700140601 timestamp=1714746117 devname="XXX" devid="XXX" vd="XXX" date=2024-05-03 time=14:21:57 eventtime=1714717317787162683 tz="XXX" logid="0000000013" type="traffic" subtype="forward" level="notice" srcip=XX.XX.X.X srcport=38915 srcintf="port5" srcintfrole="lan" dstip=XX.XX.XX.XX dstport=443 dstintf="port9" dstintfrole="wan" srccountry="Reserved" dstcountry="XXXX" sessionid=759555888 proto=17 action="accept" policyid=1 policytype="policy" poluuid="7ade8e92-454b-51e9-5c91-4feddb630366" policyname="XXXXX" service="udp/443" trandisp="snat" transip=XX.XX.XX.XXX transport=38915 appid=40169 app="QUIC" appcat="Network.Service" apprisk="low" applist="XXX" duration=49 sentbyte=1228 rcvdbyte=0 sentpkt=1 rcvdpkt=0 utmaction="block" countapp=1

It seems a bit odd to me that the TCP input doesn't have the date and time in front but starts with <189>; I wonder if that is normal.
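For reference, a leading <189> is normal for raw syslog: it is the RFC 3164 PRI header, where priority = facility * 8 + severity, so 189 = 23 * 8 + 5, i.e. facility local7 and severity notice. Whether the sender also prepends a timestamp/host header can differ between its UDP and TCP outputs. A minimal sketch to pull the PRI out of the indexed TCP events (the field names syslog_pri, facility, and severity are my own):

index=<your_index> source="tcp:514"
| rex field=_raw "^<(?<syslog_pri>\d+)>"
| eval facility=floor(syslog_pri/8), severity=syslog_pri%8
| stats count by syslog_pri, facility, severity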
Hi @daniel333, not all the data sources from Azure and Office 365 are free; some are subject to a fee. Check whether the data source you want is one of them. In addition, you could ask Splunk Support for help; don't ask Microsoft Support, because they always answer "ask Splunk", since Splunk is considered a competitor by Microsoft. Ciao. Giuseppe
Hi @myte, as you like! But scheduling the two searches:

|.... main search...
| bucket _time span=1h
| stats count BY _time
| stats avg(count) AS AverageCount max(count) AS MaxCount
| eval AverageCount=round(AverageCount,2), MaxCount=round(MaxCount,2), Type="Per Hour"
| collect index=my_summary

and

|.... main search...
| bucket _time span=1m
| stats count BY _time
| stats avg(count) AS AverageCount max(count) AS MaxCount
| eval AverageCount=round(AverageCount,2), MaxCount=round(MaxCount,2), Type="Per Minute"
| stats values(AverageCount) AS AverageCount values(MaxCount) AS MaxCount BY Type
| collect index=my_summary

and then running this search when you need results:

index=my_summary
| table Type AverageCount MaxCount

you get the same result in a single, quicker search. Let us know if you need more help, and, for the other people of the Community, please accept one answer. Ciao. Giuseppe P.S.: Karma Points are appreciated by all the Contributors
Hello Team, I am using a blueprint Lambda to process CloudWatch logs to Splunk. I have configured the HEC URL and HEC token in the Splunk web UI, and installed Splunk on an AWS Linux server. But while invoking the Lambda function I am getting the above error. HEC URL - http://54.67.83.247:8088/services/collector/raw I whitelisted the IP in the security group of the EC2 instance where Splunk is installed. Can anyone help me fix this issue?
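If the HEC endpoint is rejecting the requests, splunkd's internal logs usually say why. A hedged sketch for checking on the Splunk server itself (HttpInputDataHandler is the component splunkd typically logs HEC errors under; adjust if your version differs):

index=_internal sourcetype=splunkd component=HttpInputDataHandler
| table _time, log_level, _raw

Also note that the URL above is plain http on port 8088; if SSL is enabled in the HEC global settings (the default), the Lambda must use https instead.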
I had to set 744 permissions on the folders; that solved my issue.
Hi @yh, the TRANSFORMS command in transforms.conf applies a regex to the raw data; it doesn't work on fields. Are you sure that the TCP raw data contains the string you configured? Could you share some samples of both kinds of events that you want to filter (both UDP and TCP)? Ciao. Giuseppe
These are some basic examples once you have ingested the data; the same principles apply to Windows metrics. Analyse the data, work out the fields that contain the data, and work on the SPL until it gives you the results.

This example shows how you can monitor Linux CPU metrics - change the threshold (| where cpu_load_percent >= 1):

index=linux sourcetype=cpu
| fields _time, host, cpu_load_percent
| eval date_time=strftime(_time, "%d/%m/%Y %H:%M:%S")
| where cpu_load_percent >= 1
| table date_time, host, cpu_load_percent
| dedup host

This example shows how you can monitor memory percentage on Linux - change the threshold (| where PercentMemory >= 0):

index=linux sourcetype=ps
| fields _time, host, PercentMemory
| eval date_time=strftime(_time, "%d/%m/%Y %H:%M:%S")
| where PercentMemory >= 0
| table date_time, host, PercentMemory
| dedup host

Do similar for disk, processor, etc.
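Since the question asks for the top 10 highest consumers, here is a minimal sketch along the same lines (assuming the same index=linux fields as above; swap in your own field names for memory, disk, and processor):

index=linux sourcetype=cpu
| stats max(cpu_load_percent) AS peak_cpu BY host
| sort - peak_cpu
| head 10

The same stats/sort/head pattern works for PercentMemory and the other metrics.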
Thanks for the reply. We did do a netstat check and it's a TCP connection between the source host and Splunk. Something similar was tried, for example:

[source::udp:514]
TRANSFORMS-null1 = udp_setnull

[source::tcp:514]
TRANSFORMS-null2 = tcp_setnull

However, weirdly, the TCP part is not working. Once we even tried removing UDP and keeping just the TCP portion, but it still doesn't work. Very weird.

TRANSFORMS-null = setnull_tcp_traffic
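One thing worth verifying before going further: a [source::tcp:514] stanza in props.conf only applies when the events' source field is literally tcp:514, and TCP inputs can stamp a different source depending on how the input was defined (an explicit source on the input stanza, for example). A quick sketch to check what the TCP events actually carry:

index=<your_index>
| stats count BY source, sourcetype, host

If the source turns out to be something else, point the props.conf stanza at that value (or at the sourcetype) instead.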
Check that you are actually receiving traffic over TCP:

sudo tcpdump -i <my_interface> tcp port <splunk_port>
sudo tcpdump -i <my_interface> udp port <splunk_port>

Try the below, and see if that corrects it.

[source::udp:514]
TRANSFORMS-null = setnull_udp_traffic

[source::tcp:514]
TRANSFORMS-null = setnull_tcp_traffic

[setnull_udp_traffic]
REGEX = dstip=8\.8\.8\.8
DEST_KEY = queue
FORMAT = nullQueue

[setnull_tcp_traffic]
REGEX = dstip=8\.8\.8\.8
DEST_KEY = queue
FORMAT = nullQueue
I am receiving the logs and need a query to monitor the top 10 highest CPU, memory, processor, and disk usage.
Hello, I am referring to the following documentation: Route and filter data - Splunk Documentation. I would like to discard some syslog data coming from the firewall, for instance, before it goes through indexing. In props.conf under system I have this:

[source::udp:514]
TRANSFORMS-null = setnull

[source::tcp:514]
TRANSFORMS-null = setnull

And in transforms.conf, if I want to filter out traffic going to Google DNS:

[setnull]
REGEX = dstip=8\.8\.8\.8
DEST_KEY = queue
FORMAT = nullQueue

I have tried renaming the transforms and duplicating setnull with different names; however, the event filtering only works on the UDP source and does not work on the TCP source. Did I miss anything? It feels really weird that the event discarding does not work on the TCP syslog source. Any ideas, or alternatives for discarding events on an all-in-one (AIO) Splunk setup? Thanks in advance.
json_extract is documented as not handling periods in key names, and the documentation suggests using json_extract_exact, but json_extract_exact does not appear to work with an array of keys:

| makeresults
| fields - _time
| eval splunk_path="{\"system.splunk.path\":\"/opt/splunk/\",\"system.splunk.path2\":\"/opt/splunk/\"}"
| eval paths=mvappend("system.splunk.path","system.splunk.path2")
| eval extracted_path=json_extract_exact(splunk_path, "system.splunk.path")
| eval extracted_path2=json_extract_exact(splunk_path, "system.splunk.path2")
| eval extracted_paths=json_extract_exact(splunk_path, paths)
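If json_extract_exact indeed only takes one key per call, a possible workaround is to iterate the key list with mvmap and call it once per key - a sketch only, assuming the JSON eval functions are allowed inside an mvmap expression:

| makeresults
| fields - _time
| eval splunk_path="{\"system.splunk.path\":\"/opt/splunk/\",\"system.splunk.path2\":\"/opt/splunk/\"}"
| eval paths=mvappend("system.splunk.path","system.splunk.path2")
| eval extracted_paths=mvmap(paths, json_extract_exact(splunk_path, paths))

This should return one extracted value per key in paths, as a multivalue result.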
Here is a very simple example of "joining" two different datasets together based on their common ID. Almost all of the example is just setting up some example data; what you really need are the last 3 lines. If you paste this into a search window it will randomly return some results where the PRODUCT contains MISMATCH - if you remove the last line of the example you will see all results of the made-up data.

| makeresults
| fields - _time
``` Make some data for sourcetype=autos ```
| eval sourcetype="autos"
| eval MAKE=split("Audi,Porsche,Mercedes",",")
| mvexpand MAKE
| eval MODEL=case(MAKE="Audi", split("AU-123,AU-988", ","), MAKE="Porsche", split("PO-123,PO-988", ","), MAKE="Mercedes", split("MX-123,MX-988", ","))
| mvexpand MODEL
| eval VIN=case(MAKE="Audi", split("AU-VIN:12345678,AU-VIN:9876543", ","), MAKE="Porsche", split("PO-VIN:12345678,PO-VIN:9876543", ","), MAKE="Mercedes", split("MX-VIN:12345678,MX-VIN:9876543", ","))
| mvexpand VIN
| eval VIN=MODEL.":".VIN
``` Make some near-identical data for sourcetype=cars ```
| append [
  | makeresults
  | fields - _time
  | eval sourcetype="cars"
  | eval MANUFACTURER=split("Audi,Porsche,Mercedes",",")
  | mvexpand MANUFACTURER
  | eval PRODUCT=case(MANUFACTURER="Audi", split("AU-123,AU-988", ","), MANUFACTURER="Porsche", split("PO-123,PO-988", ","), MANUFACTURER="Mercedes", split("MX-123,MX-988", ","))
  | mvexpand PRODUCT
  | eval SN=case(MANUFACTURER="Audi", split("AU-VIN:12345678,AU-VIN:9876543", ","), MANUFACTURER="Porsche", split("PO-VIN:12345678,PO-VIN:9876543", ","), MANUFACTURER="Mercedes", split("MX-VIN:12345678,MX-VIN:9876543", ","))
  | mvexpand SN
  | eval SN=PRODUCT.":".SN
  | eval PRODUCT=PRODUCT.if(random() % 100 < 10, "-MISMATCH", "")
]
``` Take the common field ```
| eval COMMON_ID=if(sourcetype="autos", VIN, SN)
| stats values(*) as * by COMMON_ID
| where MAKE!=MANUFACTURER OR MODEL!=PRODUCT

Don't ever consider join as the first option - it's not a Splunk way of doing things and has numerous limitations. Splunk uses stats ... BY COMMON_FIELD. Hope this helps.
All, I am currently working with the Splunk Add-on for Microsoft Office 365 4.5.1 on Linux, with all inputs enabled and collecting. I am trying to see who approved a Privileged Identity Management event. I can't find the relevant events in Splunk, but I do find them in the Entra ID and Microsoft Purview dashboards.

1. Is there a TA I am missing?
2. If this TA is indeed not correctly pulling this data in, do I open a support case? Or is there another custom way to hit that endpoint and snag that data?

Thanks, -Daniel
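As a hedged starting point, it may be worth confirming whether any PIM audit records are arriving at all via the add-on's management-activity input (the sourcetype below is the add-on's standard one; the Workload and Operation filters are guesses to adjust against your data):

index=* sourcetype="o365:management:activity" Workload=AzureActiveDirectory (Operation="*PIM*" OR Operation="*Privileged*")
| stats count BY Operation

If nothing matches, the PIM approval events may not be exposed through the Office 365 Management Activity API inputs the add-on uses, which would support opening a support case.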
@kuul13 This is a straightforward use of the chart command; see this run-anywhere example:

| makeresults count=20
| fields - _time
| eval ClientName=mvindex(split("ABC",""), random() % 3)
| mvexpand ClientName
| eval ClientName="Client ".ClientName
| eval apiName="retrievePayments".mvindex(split("ABCD",""), random() % 4)
| chart count over ClientName by apiName

This sets up some example data and then uses the chart command to do the tabling you need.
When dealing with the time picker, the addinfo command is your friend, as it gives you info_min_time and info_max_time, the actual earliest and latest times of the search, so you can use these to compute things. As for avg/min: while individual counts per minute can differ wildly, if the measurement period is 2 hours and the total count is 10,000, then the avg/hour has to be 5,000, even though the first hour may have been 4,000 and the second hour 6,000. Also, if the avg/min over a period of 60 minutes is 100 (i.e. a total of 6,000), then the avg/hour must be 6,000, i.e. the total.
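A minimal sketch of that computation with addinfo (assuming a bounded time range from the picker; duration_hours and avg_per_hour are my own field names):

index=<your_index>
| addinfo
| stats count BY info_min_time, info_max_time
| eval duration_hours=(info_max_time - info_min_time)/3600
| eval avg_per_hour=round(count/duration_hours, 2)
| table count, duration_hours, avg_per_hour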
Hi. I've been a very basic user of Splunk for a while, but now have a need to perform more advanced searches. I have two different sourcetypes within the same index. Examples of the fields are below.

index=vehicles
Sourcetype=autos: VIN, MAKE, MODEL
Sourcetype=cars: SN, MANUFACTURER, PRODUCT

I'd like to search and table VIN, MAKE, MODEL, MANUFACTURER, and PRODUCT where:

VIN=SN
MAKE <> MANUFACTURER OR MODEL <> PRODUCT

Basically, where VIN and SN match, if one or both of the other fields don't match, show me. I'm not sure if a join (VIN and SN) statement is the best approach in this case. I've researched and found questions and answers related to searching and comparing multiple sourcetypes, but I've been unable to find examples that include conditions. Any suggestions you can provide would be greatly appreciated. Thank you!
Once you have tried my suggestion, please tell me what happened and what is still not working.
Hi, I am new to Splunk. I am trying to figure out how to extract the count of errors per API call made for each client. I have the following query that I run:

index=application_na sourcetype=my_logs:hec source=my_Logger_PROD retrievePayments* (returncode=Error OR returncode=Communication_Error)
| rex field=message "Message=.* \((?<apiName>\w+?) -"
| lookup My_Client_Mapping client
| table ClientName, apiName

This query parses the message to extract the apiNames that start with retrievePayments, and shows this kind of result:

ClientName    apiName
Client A      retrievePaymentsA
Client B      retrievePaymentsA
Client C      retrievePaymentsB
Client A      retrievePaymentsB

I want to see an output where my wildcard apiName values are transposed and show the error count for every client:

Client      retrievePaymentsA    retrievePaymentsB    retrievePaymentsC    retrievePaymentsD
Client A    2                    5                    0                    1
Client B    2                    2                    1                    6
Client C    8                    3                    0                    0
Client D    1                    0                    4                    3

Any help would be appreciated.
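Applying the chart approach from the reply earlier in this feed to this query would look something like the following sketch (assuming ClientName and apiName populate as shown; add | fillnull value=0 if you want empty cells shown as zeros):

index=application_na sourcetype=my_logs:hec source=my_Logger_PROD retrievePayments* (returncode=Error OR returncode=Communication_Error)
| rex field=message "Message=.* \((?<apiName>\w+?) -"
| lookup My_Client_Mapping client
| chart count over ClientName by apiName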
Splunk version is 9.1.0.2. We are trying to resolve searches that are orphaned, from the report "Orphaned Scheduled Searches, Reports, and Alerts". The list does not match what we see under "Reassign Knowledge Objects", since we resolved all of those. I am unable to find the searches (I believe they are private) but want to know why I, as an admin, am unable to manage these searches, if only to disable them. Many of the users have since left our company and I need to manage their items. Please help!
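One way to enumerate orphaned private searches from the search bar is the rest command's orphan flag, per Splunk's documentation on orphaned knowledge objects (run as admin; adjust splunk_server in a distributed environment):

| rest /servicesNS/-/-/saved/searches add_orphan_field=yes count=0 splunk_server=local
| search orphan=1
| table title, eai:acl.owner, eai:acl.app, eai:acl.sharing, is_scheduled

Private objects owned by deleted users don't show up in the normal management pages, which is why Reassign Knowledge Objects can miss them; reassigning ownership via the REST endpoint, or temporarily recreating the departed user's account, are the usual ways to regain control of them.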