All Topics


Hi, I have installed the Splunk Universal Forwarder on several Windows servers, and they send their Windows logs to the indexers. All Windows logs are saved in the 'windows-index'. However, sometimes some of the Universal Forwarders are disconnected, and I have no logs from them for a period of time. How can I find which Universal Forwarders are disconnected? I must mention that the number of UFs is more than 400.
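A minimal sketch of one common approach, assuming every forwarder appears as a distinct host value in windows-index: compare each host's most recent event time against now and flag hosts that have gone quiet (the 60-minute threshold is arbitrary).

| tstats latest(_time) as last_seen where index="windows-index" by host
| eval minutes_silent = round((now() - last_seen) / 60, 0)
| where minutes_silent > 60
| sort - minutes_silent

A forwarder that has been down for the entire search window will not show up at all, so comparing the host list against a lookup of expected forwarders (or against the Monitoring Console forwarder views) is the usual refinement.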
Hi, I have a table of time, machine, and total errors. For each machine I need to count how many times 3 or more errors happened within 5 minutes. If 3 or more errors happened in one bucket, I mark that row as True. Finally I return the frequency of 3 errors in 5 minutes (the count of rows where the flag is True). I succeeded in doing that in Python, but not in Splunk. I wrote the following search:

| table TimeStamp,machine,totalErrors
| eval time = strptime(TimeStamp, "%Y-%m-%d %H:%M:%S.%3N")
| eval threshold=3
| eval time_window="5m"
| bucket span=5m time
| sort 0 machine,time
| streamstats sum(totalErrors) as cumulative_errors by machine,time
| eval Occurrence = if(cumulative_errors >= 3, "True", "False")
| table machine,TimeStamp,Occurrence

It is almost correct. Row 5 is supposed to be True: if we calculate the delta time between rows 1 and 5, more than 5 minutes passed, but between rows 2 and 5 less than 5 minutes passed and the number of errors is >= 3. How can I change it so it checks the delta time between each pair of rows (2 to 5, 3 to 5, ...) for each machine? I hope you understand. I need short and simple code because I will also need to do this for 1m, 2m, ... windows and 3, 5, ... errors.

row | Machine | TimeStamp | Occurrence
1 | machine1 | 12/14/2023 10:12:32 | FALSE
2 | machine1 | 12/14/2023 10:12:50 | FALSE
3 | machine1 | 12/14/2023 10:13:06 | TRUE
4 | machine1 | 12/14/2023 10:13:24 | TRUE
5 | machine1 | 12/14/2023 10:17:34 | FALSE
6 | machine1 | 12/16/2023 21:01:45 | FALSE
7 | machine2 | 12/18/2023 7:53:54 | False

Thanks, Maayan
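A sketch of a rolling-window variant, assuming the goal is a trailing 5-minute window per machine rather than fixed 5-minute buckets. streamstats time_window operates on _time, so TimeStamp is parsed into _time first and the rows must be sorted by time (reverse the sort if the window appears to look in the wrong direction):

| eval _time = strptime(TimeStamp, "%Y-%m-%d %H:%M:%S.%3N")
| sort 0 _time
| streamstats time_window=5m sum(totalErrors) as errors_5m by machine
| eval Occurrence = if(errors_5m >= 3, "True", "False")
| table machine TimeStamp errors_5m Occurrence

To get the final frequency, append something like | stats count(eval(Occurrence="True")) as occurrences by machine. Changing the window or the threshold then only means changing time_window=5m and the >= 3 comparison.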
Hi, I am getting the error below when I try to configure the webhook alert action to post to Microsoft Teams.

12-19-2023 11:57:56.700 +0000 ERROR sendmodalert [292254 AlertNotifierWorker-0] - action=webhook STDERR - Error sending webhook request: HTTP Error 400: Bad Request
12-19-2023 11:57:56.710 +0000 INFO sendmodalert [292254 AlertNotifierWorker-0] - action=webhook - Alert action script completed in duration=706 ms with exit code=2
12-19-2023 11:57:56.710 +0000 WARN sendmodalert [292254 AlertNotifierWorker-0] - action=webhook - Alert action script returned error code=2
This is my end_time: 1703027679.5678809. After this query the output showed the 1969 date format:

| eval time=strftime(time, "%m/%d/%y %H:%M:%S")

But when I tried it with the literal epoch value instead of the field, it showed the correct time:

| eval time=strftime(1703027679.5678809, "%m/%d/%y %H:%M:%S") | table time
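The 12/31/69-style output typically means strftime received a zero or empty value rather than the intended epoch, which happens when the field named in the eval (time) does not hold the timestamp. A minimal sketch, assuming the epoch value actually lives in a field called end_time:

| eval time = strftime(end_time, "%m/%d/%y %H:%M:%S")
| table end_time time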
Hi All, I am trying to send an email using the sendemail command with a CSV as an attachment. The email is sent successfully, but the file is named "unknown-<date_time>". I want to rename this file; please let me know how to do that.

| sendemail sendresults=true format=csv to=\"$email$\" graceful=false message="This is a test email" subject="Test Email Check"

Also, the message and subject are getting truncated: the message body arrives as "This" and the subject as "Test". Please help me understand what is going wrong.

Help on:
1. Renaming the CSV file.
2. How to avoid the message body and subject getting truncated.

I really appreciate your help on this. Regards, PNV
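Two hedged observations: the attachment name generally comes from the name of the search that produced the results, so an ad-hoc/unsaved job tends to yield "unknown-<date_time>", and saving or scheduling the search under the desired name is the usual way to influence it; and the backslash-escaped quotes around $email$ are a likely cause of the truncation, because once quoting breaks, sendemail splits message and subject at the first space. A sketch with plain quoting, assuming this runs somewhere ordinary double quotes are allowed:

| sendemail sendresults=true format=csv to="$email$" graceful=false message="This is a test email" subject="Test Email Check"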
Hello, I would like to separate my data streams by opening three receiving ports. I have a multisite indexer cluster and I have created an app with this default inputs.conf file:

[tcp://9998]
disabled = 0
index = iscore_test
sourcetype = iscore_test
connection_host = ip

[tcp://9999]
disabled = 0
index = iscore_prod
sourcetype = iscore_prod
connection_host = ip

But when I check the receiving ports on the indexer, it only shows 9997 (which I would like to use just for Splunk internal logs). I think there is a faster way to do this than setting the receiving ports manually on each indexer. I already checked, and the app I created was successfully copied to the indexers.
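Two hedged points that often explain this: inputs copied onto an indexer only take effect after a restart, and on an indexer cluster the usual pattern is to distribute such an app from the cluster manager with a configuration bundle push rather than copying it by hand. Also, if the senders are Splunk forwarders, the stanzas should use splunktcp:// rather than raw tcp:// (cooked data keeps the index and sourcetype assigned on the forwarder); raw tcp:// with index/sourcetype as above is only appropriate for non-Splunk senders. A sketch, with the app name made up:

# On the cluster manager (manager-apps on newer versions, master-apps on older ones):
# $SPLUNK_HOME/etc/manager-apps/iscore_inputs/local/inputs.conf
[splunktcp://9998]
disabled = 0

[splunktcp://9999]
disabled = 0

# then push it to the peers:
# splunk apply cluster-bundle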
Hi, this app is reporting that one of my private apps is not compatible with Python 3.

Issue: File path designates Python 2 library.
App: TA-LoRaWAN_decoders
File Path: .../bin/br_uncompress.py
Issue No. 1: Error while checking the script: Can't parse /opt/splunk/etc/apps/TA-LoRaWAN_decoders/bin/br_uncompress.py: ParseError: bad input: type=1, value='print', context=(' ', (24, 8))

Any suggestions as to what the issue is?
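The ParseError at value='print' usually means the compatibility check hit a Python 2 print statement (here around line 24, column 8 of br_uncompress.py). A hypothetical before/after sketch of the kind of change involved; the variable name is made up:

size = 42  # hypothetical value; the real script derives this from the LoRaWAN payload

# Python 2 print statement - this is the syntax the Python 3 parser rejects:
# print "decoded size:", size

# Python 3 print function:
print("decoded size:", size)

Running 2to3 over the script, or python3 -m py_compile on it, is a common way to locate every offending line.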
As a Splunk SME, I'm tasked with setting up the ingestion of Salesforce Marketing Cloud transactional messages into Splunk. We're currently trying to use the HTTP Event Collector (HEC) for this, but we couldn't get it to work because it's giving us this error:

The Marketing Cloud developer I'm working with told me that, in order to resolve the above error, we need to figure out how to "verify callbacks" from our end (Splunk): https://developer.salesforce.com/docs/marketing/marketing-cloud/guide/verifyCallback.html

I need to know if there's a way to achieve that through HEC, or if we need to take an entirely different approach to get the Marketing Cloud events into Splunk.
Hello, I’ve upgraded my FreeBSD server from 13.2-RELEASE to 14.0-RELEASE. Now, the Splunk forwarder crashes when I try to start it. I made a clean install of the latest Splunk forwarder: same result. Any hint appreciated.

pid 8593 (splunkd), jid 0, uid 0: exited on signal 11 (no core dump - too large)
pid 8605 (splunkd), jid 0, uid 0: exited on signal 11 (no core dump - too large)

edit: last lines of ktrace output

11099 splunkd NAMI "/opt/splunkforwarder/etc/system/default/authentication.conf"
11099 splunkd RET open 3
11099 splunkd CALL fstat(0x3,0x82352cf30)
11099 splunkd STRU struct stat {dev=10246920463185163261, ino=219, mode=0100600, nlink=1, uid=1009, gid=1009, rdev=18446744073709551615, atime=0, mtime=1699928544, ctime=1702914937.560528000, birthtime=1699928544, size=1301, blksize=4096, blocks=9, flags=0x800 }
11099 splunkd RET fstat 0
11099 splunkd CALL read(0x3,0x35c8bc0,0x1000)
11099 splunkd GIO fd 3 read 1301 bytes "# Version 9.1.2 # DO NOT EDIT THIS FILE! # Changes to default files will be lost on update and are difficult to …/… enablePasswordHistory = false passwordHistoryCount = 24 constantLoginTime = 0 verboseLoginFailMsg = true "
11099 splunkd RET read 1301/0x515
11099 splunkd CALL read(0x3,0x35c8bc0,0x1000)
11099 splunkd GIO fd 3 read 0 bytes ""
11099 splunkd RET read 0
11099 splunkd CALL close(0x3)
11099 splunkd RET close 0
11099 splunkd PSIG SIGSEGV SIG_DFL code=SEGV_MAPERR
11084 splunk RET wait4 11099/0x2b5b
11084 splunk CALL write(0x2,0x820c56800,0x2a)
11084 splunk GIO fd 2 wrote 42 bytes "ERROR: pid 11099 terminated with signal 11"
11084 splunk RET write 42/0x2a
11084 splunk CALL write(0x2,0x825106cf7,0x1)
11084 splunk GIO fd 2 wrote 1 byte " "
11084 splunk RET write 1
11084 splunk CALL exit(0x8)
Hello Splunkers! I'm trying to upgrade my Splunk Enterprise from 9.0.x to 9.1.x. After checking the release notes, I saw that I need to add the following:

Reference link: https://docs.splunk.com/Documentation/Splunk/9.1.2/Installation/AboutupgradingREADTHISFIRST

I did that and proceeded with the upgrade, but I received an error regarding UTF8 even though I added the required line. Any suggestions for what I might do to overcome this issue?
Hello, how can I enable the mouse-hover feature on a column chart so that it shows its data in Dashboard Studio? I have been searching for an answer but haven't found anything that works. Many thanks.

index=web AND uri_path!="*.nsf*" AND uri_path!="*:443"
| timechart span=1d dc(src_ip) by src_ip limit=0
I want to make a box-plot graph from my data. I found a solution, but it requires installing an app from a file in Splunk, so it can't be applied to my own app (because "my Apps" and "install app" are different). Is there any way to draw a box plot in my own app without using "Install app from file"?
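One workaround that needs no installed app is to compute the box-plot statistics yourself with stats and chart the result; a minimal sketch, assuming a numeric field called value and a grouping field called category (both hypothetical names):

| stats min(value) as lower_whisker perc25(value) as q1 median(value) as q2 perc75(value) as q3 max(value) as upper_whisker by category

This gives the five numbers a box plot encodes; without a custom visualization it will render as a table or column chart rather than a true box-and-whisker shape, which is what the downloadable app provides.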
We are trying to ingest a large amount (petabytes) of information into Splunk. The events are in JSON files named like 'audit_events_ip-10-23-186-200_1.1512077259453.json'. The pipeline is: JSON files > folder > UF > HF cluster > indexer cluster.

UF inputs.conf:

[batch:///folder]
_TCP_ROUTING = p2s_au_hf
crcSalt = <SOURCE>
disabled = false
move_policy = sinkhole
recursive = false
whitelist = \.json$

We are seeing that events from specific files (not all) are getting duplicated: some files are indexed exactly twice. Since this is a [batch:///] input, which is supposed to delete the file after reading it, and crcSalt = <SOURCE> is set, we are not able to figure out why and what creates the duplicates. Would appreciate any help, references or pointers. Thanks in advance!
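A sketch of a search to confirm and characterize the duplication, with the index name made up; identical raw events indexed more than once show count > 1, and the source/host/splunk_server values hint at whether both copies came through the same file, forwarder, and indexer:

index=my_audit_index
| stats count values(source) as sources values(host) as hosts dc(splunk_server) as indexer_count by _raw
| where count > 1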
Hello, I am having issues getting the expected field/value pairs using the following props and transforms configuration files. Sample events and my configuration files are given below. Any recommendation will be highly appreciated.

My configuration files:

[mypropsfile]
REPORT-mytranforms = myTransfile

[myTransfile]
REGEX = ([^"]+?):\s+([^"]+?)
FORMAT = $1::$2

Sample events:

2023-11-15T18:56:30.098Z, User ID: 90A, User Type: TempEMP,  Product Code:  pc, UAT:  UTA-True, Event Type:  TEST,  EventID:  Lookup, Remote Host: 25.191.157.244
2023-11-15T18:56:29.098Z, User ID: 90A, Host:  vx2tbax.dev, User Type: TempEMP,  Product Code:  pc, UAT:  UTA-True, Event Type:  TEST,  EventID:  Lookup, Remote Host: 25.191.157.244
2023-11-15T18:56:28.098Z, User ID: 91B, User Type:  TempEMP,  Product Code:  pc, UAT:  UTA-True, Event Type:  TEST,  EventID:  Lookup, Remote Host: 25.191.157.244
2023-11-15T18:56:27.098Z, User ID: 91B, User Type:  TempEMP,  Product Code:  pc, UAT:  UTA-True, Event Type:  TEST,  EventID:  Lookup, Remote Host: 25.191.157.244
2023-11-15T18:56:27.001Z, User ID: 91B, User Type:  TempEMP,  Host:  vx2tbax.dev, Product Code:  pc, UAT:  UTA-True, Event Type:  TEST,  EventID:  Lookup, Remote Host: 25.191.157.244
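As written the REGEX can't do much: its character classes only exclude double quotes, so a "name" can swallow commas and earlier pairs, and the trailing non-greedy group matches only a single character of the value. A sketch of a corrected transform, assuming the events really are comma-delimited "Name: value" pairs and that [mypropsfile] is the events' sourcetype; CLEAN_KEYS (on by default) turns names like "User ID" into User_ID:

[myTransfile]
REGEX = \s*([^,:]+?):\s+([^,]+)
FORMAT = $1::$2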
I am attempting to ingest an XML file but am getting stuck; can someone please help? The data will ingest if I remove "BREAK_ONLY_BEFORE = \<item\>", but then with a new event per item.

This is the XML and the configuration I have tried:

<?xml version="1.0" standalone="yes"?>
<DocumentElement>
<item>
<hierarchy>ASA</hierarchy>
<hostname>AComputer</hostname>
<lastscandate>2023-12-17T11:08:21+11:00</lastscandate>
<manufacturer>VMware, Inc.</manufacturer>
<model>VMware7,1</model>
<operatingsystem>Microsoft Windows 10 Enterprise</operatingsystem>
<ipaddress>168.132.11.200</ipaddress>
<vendor />
<lastloggedonuser>JohnSmith</lastloggedonuser>
<totalcost>0.00</totalcost>
</item>
<item>
<hierarchy>ASA</hierarchy>
<hostname>AComputer</hostname>
<lastscandate>2023-12-17T12:20:21+11:00</lastscandate>
<manufacturer>Hewlett-Packard</manufacturer>
<model>HP Compaq Elite 8300 SFF</model>
<operatingsystem>Microsoft Windows 8.1 Enterprise</operatingsystem>
<ipaddress>168.132.136.160</ipaddress>
<vendor />
<lastloggedonuser>JohnSmith</lastloggedonuser>
<totalcost>0.00</totalcost>
</item>
<item>
<hierarchy>ASA</hierarchy>
<hostname>AComputer</hostname>
<lastscandate>2023-12-17T11:54:28+11:00</lastscandate>
<manufacturer>HP</manufacturer>
<model>HP EliteBook 850 G5</model>
<operatingsystem>Microsoft Windows 10 Enterprise</operatingsystem>
<ipaddress>168.132.219.32, 192.168.1.221</ipaddress>
<vendor />
<lastloggedonuser>JohnSmith</lastloggedonuser>
<totalcost>0.00</totalcost>
</item>
<item>
<hierarchy>ASA</hierarchy>
<hostname>AComputer</hostname>
<lastscandate>2023-12-17T11:50:20+11:00</lastscandate>
<manufacturer>VMware, Inc.</manufacturer>
<model>VMware7,1</model>
<operatingsystem>Microsoft Windows 10 Enterprise</operatingsystem>
<ipaddress>168.132.11.251</ipaddress>
<vendor />
<lastloggedonuser>JohnSmith</lastloggedonuser>
<totalcost>0.00</totalcost>
</item>

Inputs.conf:

[monitor://D:\SplunkImportData\SNOW\*.xml]
sourcetype=snow:all:devices
index=asgmonitoring
disabled = 0

Props.conf:

[snow:all:devices]
KV_MODE=xml
BREAK_ONLY_BEFORE =\<item\>
SHOULD_LINEMERGE = false
DATETIME_CONFIG = NONE
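One detail that may explain the behavior: BREAK_ONLY_BEFORE is only consulted when SHOULD_LINEMERGE = true, so combined with SHOULD_LINEMERGE = false it is silently ignored. A sketch of a props.conf stanza that produces one event per <item>, assuming that is the intent (the LINE_BREAKER capture group discards the line breaks between items):

[snow:all:devices]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+\s*)<item>
KV_MODE = xml
DATETIME_CONFIG = NONE

With this breaking, the XML prolog and <DocumentElement> lines end up as one small leading event, which can be ignored or dropped with a nullQueue transform if it matters.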
Every time I create a table visualization, I notice that the value 0 is always aligned on the left side while the rest are aligned on the right side (322, 3483, 0, 0 are in the same column). Is there any reason behind this, and any way to fix it? Thanks!
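Splunk's table visualization right-aligns cells it considers numeric and left-aligns strings, so a mixed column usually means the zeros are strings (for example, produced by fillnull value=0 or by an eval that emits "0"). A sketch of forcing the column back to numeric at the end of the search, with the column name total made up:

| eval total = tonumber(total)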
Hi, we are ingesting Azure NSG flow logs and visualizing them using the Microsoft Azure App for Splunk: https://splunkbase.splunk.com/app/4882

The data is in JSON format with multiple levels/records in a single event; each record can have multiple flows, flow tuples, etc. (Adding a few screenshots here to give the context.) The default extractions for the main JSON fields look fine, but when it comes to values within the flow tuple field, i.e. records{}.properties.flows{}.flows{}.flowTuples{}, Splunk only keeps values from the very first entry. How can I make these src_ip, dest_ip fields also get multiple values (across all records/flow tuples, etc.)?

Splunk extracts values only from that first highlighted entry. Here is the extraction logic from this app:

[extract_tuple]
SOURCE_KEY = records{}.properties.flows{}.flows{}.flowTuples{}
DELIMS = ","
FIELDS = time,src_ip,dst_ip,src_port,dst_port,protocol,traffic_flow,traffic_result

Thanks,
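A DELIMS-based extraction like this one appears to split only the first value of the multivalue source key, which matches what you are seeing. One search-time way to get every tuple, assuming the default JSON extraction already populates the flowTuples path, is to expand the multivalue field and split each tuple yourself:

| spath path=records{}.properties.flows{}.flows{}.flowTuples{} output=flow_tuple
| mvexpand flow_tuple
| eval parts = split(flow_tuple, ",")
| eval tuple_time = mvindex(parts, 0), src_ip = mvindex(parts, 1), dst_ip = mvindex(parts, 2), src_port = mvindex(parts, 3), dst_port = mvindex(parts, 4), protocol = mvindex(parts, 5), traffic_flow = mvindex(parts, 6), traffic_result = mvindex(parts, 7)
| fields - parts flow_tuple

mvexpand multiplies the row count and can hit memory limits on large NSG events, so it is best applied after narrowing the search.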
We use the free version of syslog-ng, and recently we had a requirement to add TLS on top of TCP, but we don't have the knowledge to implement it. Therefore we wonder whether anybody has migrated from syslog-ng to SC4S to handle more advanced requirements such as TCP/TLS: https://splunkbase.splunk.com/app/4740 If so, what were the lessons learned and the motivations for doing so?
Where is the data from the Splunk Enterprise Security (ES) Investigation Panel stored? In the previous version, it seemed to be stored in a KV lookup, but I can't find it in the current 7.x version. I understand that the Notable index holds information related to incidents from the Incident Review Dashboard. How can we map Splunk Notables and their Investigations together to generate a comprehensive report in the current 7.x ES version?