All Topics


Hello, I'm not super familiar with Splunk yet, but I have the following scenario:

1 - Several applications on a public cloud provider
2 - A heavy forwarder deployed on the public cloud provider
3 - Splunk Cloud

The applications' log volume in the cloud is huge, but most of it is unrelated and not wanted in the SIEM, and we cannot filter the data at the source itself. Is it possible to receive the full volume on the heavy forwarder and have the HF select and DISCARD data before sending to Splunk Cloud? Or maybe we can configure the HF to query the log sources for a specific string and bring in only what we want?

Thank you!
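Discarding events on the heavy forwarder with nullQueue routing is the usual pattern for this; below is a minimal props.conf/transforms.conf sketch, assuming a hypothetical sourcetype app_logs and a hypothetical keep pattern - adjust both to your data.

# props.conf on the heavy forwarder (hypothetical sourcetype)
[app_logs]
TRANSFORMS-routing = discard_everything, keep_wanted

# transforms.conf -- order matters: send everything to the nullQueue first,
# then route the events you actually want back to the indexQueue
[discard_everything]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

[keep_wanted]
# hypothetical pattern for the events the SIEM needs
REGEX = (?i)login|firewall
DEST_KEY = queue
FORMAT = indexQueue

Events left in the nullQueue are dropped at parse time, so they are never forwarded to Splunk Cloud.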
In this scenario, each HOST_NAME has many HOME_LOCATIONs. Each HOME_LOCATION has unique info - in this case, the RDBMS_VERSION and the DATABASE_RELEASE. I am trying to produce a simple statistics table that shows each unique HOME_LOCATION (and its accompanying info) for each HOST_NAME.

When I run the below (1st screenshot), the data is aligned as I'd expect it to be:

| stats values(HOME_LOCATION) values(RDBMS_VERSION) by HOST_NAME

When I run the below (2nd screenshot) and add a third values field, the data becomes misaligned for some rows:

| stats values(HOME_LOCATION) values(RDBMS_VERSION) values(DATABASE_RELEASE) by HOST_NAME

What am I missing or doing incorrectly?
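For what it's worth, values() sorts and deduplicates each multivalue column independently, so the columns do not stay row-aligned with each other. A minimal sketch that keeps each location's info on its own row by grouping on the location as well:

| stats values(RDBMS_VERSION) as RDBMS_VERSION values(DATABASE_RELEASE) as DATABASE_RELEASE by HOST_NAME HOME_LOCATION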
Hi, We have Server Visibility enabled, and I can see the processes for nodes. We want to monitor whether a process is running or not. Is there some trick to finding the metric in the Metric Browser? With 5-8 thousand nodes it's extremely difficult to navigate the UI. Does anyone have a base metric path as a starting point, or some trick to copy the metric URL (similar to BTs and other entities)? Thanks, Chris
I was trying the operation below but am not getting the expected result.
1. I need ID from the subsearch, which is the join parameter.
2. I want to apply stats to the outer query.
The stats are getting applied to the inner query instead:

index=* methodName=* success=true
| join ID [|SEARCH index=* Success=true ]
| stats count(eval(Success="true")) as SuccessRate, count(eval(Success="false")) as FailureRate by Action

I'd appreciate quick help on this.
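Hard to say without the data, but two things often bite here: SPL field names are case-sensitive (the search mixes success and Success), and the subsearch inside join should start with the search keyword rather than a leading pipe. A hedged sketch, assuming Success is the real field name and the subsearch only needs to supply the matching IDs:

index=* methodName=*
| join ID [ search index=* Success=* | fields ID Success ]
| stats count(eval(Success="true")) as SuccessRate count(eval(Success="false")) as FailureRate by Action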
Hello, I am interested in knowing the best storage option among these three (DAS, NAS or SAN) when you want to store the data from indexers for the long term.   Thanks, 
The query that is generated by Splunk is quite convoluted, and I would like to provide my own query for the "Open in Search" action on one of the panels in my dashboard. Is it possible to do so?

edit: Corrected to "Open in Search"
Hi Team, We are constantly getting the errors below in the forwarders' splunkd.log:

ERROR TCPOutputQ - Unexpected event id=4
ERROR TCPOutputQ - Unexpected event id=7

However, we have observed that data is getting ingested to the Splunk indexers without any issue. Can anyone please help us understand what exactly this error relates to?

With Regards,
Krishna.
I have my SonicWall log files coming into Splunk. When searching this index I want to reduce "dst" (the destination IP address) to just the IP, without the port number and interface, using (for example) regex. Note that the format used for "src" and "dst" is (ip address):(port number):(interface).

So I run a search like this (NOTE: the first eval line is my own attempt; however, it does not give the result I had in mind):

index=sonicwall msg="Connection Opened" OR msg="Connection Closed" earliest=-2m latest=-1m
| eval dst=if(match(dst, "\d{1,3}.\d{1,3}.\d{1,3}.\d{1,3}:\d{1,5}:X\d{1}"), dst, replace(dst, "(\d{1,3}.\d{1,3}.\d{1,3}.\d{1,3}):\d{1,5}:X\d{1}","\1"))
| stats first(_time) as _time by src dst proto msg
| inputlookup append=t firewall_open_connections
| fillnull msg value="Connection Opened"
| eval closed=if(msg="Connection Closed",_time,"1")
| eval open=if(msg="Connection Opened",_time,"1")
| stats first(open) as open first(closed) as closed by src dst proto
| where open > closed
| rename open as _time
| fields src dst proto _time
| outputlookup firewall_open_connections

This results in:

src dst proto _time
10.0.1.5:50492:X2 8.8.8.8:53:X1 udp/dns 2022-06-14 15:40:08
192.168.1.100:37016:X0 54.81.233.206:443:X1 tcp/https 2022-06-14 15:39:01
192.168.1.100:38376:X0 104.244.42.130:443:X1 tcp/https 2022-06-14 14:49:14
192.168.1.100:38611:X0 172.217.132.170:443:X1 udp/https 2022-06-14 15:37:51

Now I would like the "dst" results to be stripped of :(port number):(interface) or :(interface). In other words, only the IP address should remain. How do I do that within my query in Splunk, with for example regex (or another method)? Any tip is welcome; I am very new to Splunk.
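A minimal sketch of one way to do this: drop the if/match wrapper (note the unescaped dots in those patterns match any character anyway) and just keep everything up to the first colon:

| eval dst=replace(dst, "^(\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}).*$", "\1")

Applied right after the base search, this leaves only the IPv4 address in dst; the same line with src would clean up the source field as well.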
I would like to extract a specific part of the data from the raw event. The value to be extracted is the ID (highlighted in the original post):

"aepassword": "kmdAkcu)n>Ec_.a(m5P7?8-n",
"aeci": {
  "outgoing_server": "mailrv.aaa.com",
  "email_footer": "C:\\ProgramData\\bbb\\AutomationNote\\Email\\aa_Mail_Footer.png",
  "email_header": "C:\\ProgramData\\bbb\\AutomationNote\\Email\\aa_Mail_Header.png",
  "signature": "C:\\ProgramData\\bbb\\Automation\\Email\\bb_Email_Signature.txt",
  "requires_authentication": "false",
  "reply-to": "us@aaa.com",
  "primaryaddress": "ussdev@aaa.com",
  "host": "ussdev@bbb.com",
  "entity_alternate_names": "usdev@aaa.com",
  "outgoing_port": "2675",
  "entityid": "wmid-1607548215055521",
  "name": "bbb_MailBox",
  "entitytype": "Sub-System",
  "entitytype": "Workplace",
  "technology": "O736i85",
  "tenantid": 1000011,
  "cloudprovider": "",
  "satellite": "sat-16107579705752592",
  "resourceid": null,
  "UDetails": {
    "creds": { "email": "NA" },
    "id": 14,
    "name": "N/A"
  },
  "encryptionKey": "5inqhg7ckj7klk2w4osk0",
  "user": {
    "id": 5,
    "name": "CRI Admin",
    "employeecode": "125",
    "email": "admin@aaa.com"
  },
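Assuming the highlighted ID is the UDetails.id value and the whole event is valid JSON, a minimal spath sketch (the exact path depends on how the full event nests, so adjust as needed):

| spath path=aeci.UDetails.id output=ID
| table ID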
Hi all, I have some data (task name, execution date, link) uploaded earlier. Now I want to add some more data related to the task name: component name and number of components. If I upload the second data set in the form (task name, component name, number of components), will I be able to get all the data together based on the one common field, task name? Does anyone know a solution for this? My first data set is task name, execution date, and link; the next set is task name, component name, and number of components.
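If both data sets live in CSV lookups, a lookup on the shared field should stitch them together - a minimal sketch with hypothetical file and field names (task_executions.csv and task_components.csv):

| inputlookup task_executions.csv
| lookup task_components.csv task_name OUTPUT component_name number_of_components
| table task_name execution_date link component_name number_of_components

If a task has several components, the lookup returns them as multivalue fields on that task's row.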
Hi all, I need to calculate the duration, i.e. the difference between endtime and starttime, and display it in a user-friendly format. I have looked at different posts on the forum and am using the same logic, yet if you see my Splunk results below, the duration column shows numbers like 81, 82, 96... which doesn't make sense. Are these differences in seconds? Even if they are seconds, the math doesn't seem to be correct. How can I make the diff value show in a readable format like 81 seconds, or 00:00:81 (HH:MM:SS)?

| transaction eventID startswith=starting endswith=end
| eval starttime = _time
| eval endtime = _time + duration
| eval duration = endtime - starttime
| convert ctime(starttime)
| convert ctime(endtime)
| table starttime, endtime, duration
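For reference, the duration field that transaction produces is already in seconds (and recomputing endtime - starttime just returns the same number). A minimal sketch that renders it in a readable form:

| eval duration_readable = tostring(duration, "duration")

tostring with the "duration" option converts a number of seconds into HH:MM:SS form, so 81 becomes 00:01:21.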
I have a list of products (in a CSV lookup) with fields such as prod_name, product_ID, price_tag. Lookup name: myproduct.csv. I want to check, for all the products in my lookup, whether they are "price tagged" or not.

I have an index and sourcetype that contain events for all the products that are "price tagged": index=all sourcetype=all_price_tagged_poducts. Fields: prod_ID (same as product_ID in the lookup). If a product_ID value from my lookup is present in any of the events in sourcetype=all_price_tagged_poducts, then I know that product in my .csv lookup is 'price tagged'. I need help writing a query for this.
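A hedged sketch of one way to do this, starting from the lookup and left-joining a count of matching events per product (subject to the usual subsearch result limits):

| inputlookup myproduct.csv
| join type=left product_ID [ search index=all sourcetype=all_price_tagged_poducts | rename prod_ID as product_ID | stats count as tagged_events by product_ID ]
| eval price_tagged=if(coalesce(tagged_events, 0) > 0, "yes", "no")
| table prod_name product_ID price_tagged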
Hello, I have a field that does not appear in the list of fields on the left when doing a search. I have looked for information on the internet about possible causes and solutions, but in my case it is not because I am searching in something other than Verbose mode, nor because the field appears in less than 1% of events, nor because I have not chosen All Fields in the "X more fields" section, which apparently are the reasons most people have this problem.

What surprises me is that when I try to create another extracted field, the field I need appears in the list of available fields, so I cannot create another field (from the GUI) that captures the same thing as the field in question.

The only workaround I have found, which in principle does not work for me because I need the field to be visible in the list I mentioned, is to run the search using the rex command or the extract reload=T command.

So, my question is: do I have to make a change in some configuration file, or is there something else I can do to make the field available in the list of fields on the left when running a search?

Thanks in advance and best regards.
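For reference, a search-time extraction defined in props.conf normally surfaces its field in that sidebar list, so it may be worth checking how the existing extraction is defined - a minimal sketch with hypothetical names:

# props.conf on the search head (hypothetical sourcetype, field, and regex)
[my_sourcetype]
EXTRACT-my_field = my_field=(?<my_field>\S+)

Also worth checking: whether the existing extraction is shared globally or restricted to one app, since knowledge-object permissions can hide a field from searches run in other apps.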
Hi, We would like to ingest some metrics from a third party into AppDynamics, and I would like to know if that is possible. I was thinking about the API, but based on the official information it seems we can't do it that way - ingesting events is the only possibility. Thanks, Carlos
Hello, My alert result is a table like this. I set the recipient as the token $result.EMAIL_LIST$ and the trigger is [For each result], so for each row it sends an email, as expected. I want it to send each recipient their own block of data (inline in the email), for example NASB 1 row, COOBACK 4 rows, SEAB 9 rows, etc. I'm thinking about grouping them by EMAIL_LIST, but then the table doesn't look good and neat anymore. Does anyone have a solution for this? Thanks in advance.
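One hedged idea: collapse each recipient's rows into multivalue fields, so there is still one result (and therefore one email) per recipient that carries all of that recipient's rows - a sketch with hypothetical column names:

| stats list(HOST) as HOST list(STATUS) as STATUS by EMAIL_LIST

list() keeps values in event order (unlike values(), which sorts and dedupes), so the columns stay aligned within each recipient's block.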
I am configuring Splunk_TA_fortinet_fortigate and no data is indexed. What might be the issue?

The Splunk_TA_fortinet_fortigate is installed on a heavy forwarder, and the input is defined:

[splunk@ilissplfwd09 local]$ cat inputs.conf
[udp://GS-J7-FAZ3K-01-10g.corp.amdocs.com:55555]
connection_host = none
index = test
sourcetype = fortigate_log

From default/props.conf:

[fgt_log]
TRANSFORMS-force_sourcetype_fgt = force_sourcetype_fortigate
SHOULD_LINEMERGE = false
EVENT_BREAKER_ENABLE = true

From the logs:

06-13-2022 12:44:04.870 +0300 INFO Metrics - group=udpin_connections, xxxxxxxxxx:55555, sourcePort=55555.000, _udp_bps=0.000, _udp_kbps=0.000, _udp_avg_thruput=0.000, _udp_kprocessed=0.000, _udp_eps=0.000
component = Metrics
date_zone = 180
event_message = group=udpin_connections, xxxxxxxxxxxxx:55555, sourcePort=55555.000, _udp_bps=0.000, _udp_kbps=0.000, _udp_avg_thruput=0.000, _udp_kprocessed=0.000, _udp_eps=0.000
host = xxxxxxx
index = _internal
log_level = INFO
source = /opt/splunk/var/log/splunk/metrics.log
sourcetype = splunkd
splunk_server_group = dmc_group_indexer
splunk_server_group = dmc_indexerclustergroup_C7623105-1D08-4451-8FC9-DCCE1F03C748

No data is indexed and no error messages are generated in the internal indexes.
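Two things stand out, hedged since only fragments are shown: the udpin_connections metrics line reports _udp_bps=0.000, which suggests nothing is actually arriving on port 55555, and the props stanza shown keys off the fgt_log sourcetype while the UDP input assigns fortigate_log, so the TA's force_sourcetype transform may never run. If traffic does start flowing, a tweak worth trying:

# inputs.conf on the heavy forwarder -- assumption: the TA's parsing is
# driven by the fgt_log sourcetype shown in its props.conf
[udp://GS-J7-FAZ3K-01-10g.corp.amdocs.com:55555]
connection_host = none
index = test
sourcetype = fgt_log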
Hi There, I have a Windows Server 2008 machine without AD. I would like to forward Windows event logs from this server to a heavy forwarder running on Linux. I have tried:
1. Native WEF
2. Syslog-NG
3. NXLog
None of these work, since they all require a domain subscription and I don't have AD. I have written a PowerShell script to export the Windows event logs, but I don't know how to forward these logs to the HF running on RHEL. Kindly let me know how to proceed. Thanks in advance.
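One hedged option: open a raw network input on the heavy forwarder and have the script deliver the exported events to that port (or write them to a file the HF monitors). A minimal inputs.conf sketch, assuming port 5514 and index wineventlog are free choices on your side:

# inputs.conf on the Linux heavy forwarder (hypothetical port and index)
[tcp://5514]
sourcetype = WinEventLog
index = wineventlog
connection_host = ip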
I want to have multiple links between two nodes, like below:

Node1                      Node2
 ###--------------------###
 ###--------------------###
If the data is present in the JSON format {[]}, it gets extracted; however, when the data is present in {} as shown below, it doesn't behave the same. How can fields and values be extracted from the data in {}?

_raw data:

{"AlertEntityId": "abc@domai.com", "AlertId": "21-3-1-2-4--12", "AlertType": "System", "Comments": "New alert", "CreationTime": "2022-06-08T16:52:51", "Data": "{\"etype\":\"User\",\"eid\":\"abc@domai.com\",\"op\":\"UserSubmission\",\"tdc\":\"1\",\"suid\":\"abc@domai.com\",\"ut\":\"Regular\",\"ssic\":\"0\",\"tsd\":\"Jeff Nichols <jeff@Nichols.com>\",\"sip\":\"1.2.3.4\",\"srt\":\"1\",\"trc\":\"abc@domai.com\",\"ms\":\"Grok - AI/ML summary, case study, datasheet\",\"lon\":\"UserSubmission\"}"}

When I run the query "| table Data", I get the result below, but how do I get the values of "eid" and "tsd"?

{"etype":"User","eid":"abc@domai.com","op":"UserSubmission","tdc":"1","suid":"abc@domai.com","ut":"Regular","ssic":"0","tsd":"Jeff Nichols <jeff@Nichols.com>","sip":"1.2.3.4","srt":"1","trc":"abc@domai.com","ms":"Grok - AI/ML summary, case study, datasheet","lon":"UserSubmission"}
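Since Data holds a JSON object serialized as an escaped string, pointing spath at that field should expand it into fields - a minimal sketch:

| spath input=Data
| table eid tsd

spath's input argument tells it which field to parse instead of _raw, which is why the nested fields don't appear automatically.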
Hello, In my dashboard, I need to compare two single-value panels between two different times. The first single-value panel stats the events over the last 15 minutes, like this:

| stats max(sys_session_count) as session by host
| stats sum(session) as session
| table session

Now, what I need to do is compare this current single-value panel with the result from one week before, during the same time slot. For example, today is the 13th of June and the current time is 8:15 AM, so in the second panel I need to display the result for the 6th of June at 8:15. Here is what I am doing:

`index` sourcetype="system" earliest=-7d@d+7h latest=-7d@d+19h
| bin _time span=15m
| eval time=strftime(_time,"%H:%M")
| stats max(sys_session_count) as session by host time
| stats sum(session) as session by time
| eval current=now()
| bin current span=15m
| eval current=strftime(current,"%H:%M")
| where time=current
| table session time

But I don't think it's right, because whatever the time is (8:15, 8:30, 8:45...), the result is almost the same. So does anybody have an idea of how to do this correctly? Thanks
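A hedged alternative: let the time-range modifiers do the shifting, so the second panel simply searches the same 15-minute window offset by 7 days and can reuse the first panel's stats as-is (assuming the first panel runs over the last 15 minutes):

`index` sourcetype="system" earliest=-7d@m-15m latest=-7d@m
| stats max(sys_session_count) as session by host
| stats sum(session) as session
| table session

Here -7d@m-15m means: go back 7 days, snap to the minute, then go back 15 more minutes - the same snap-then-offset pattern as the -7d@d+7h modifier already used above - which mirrors a "last 15 minutes" window one week earlier.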