All Posts


I tried to use sourcetype=<sourcetypename> | rex field=_raw "(?<TLD>\.\w+?)(?:$|\/)" | table TLD
It returned TLDs, but it also included values that I think may be parts of IP addresses, e.g. .33, .136, .74, etc.
Hi @ITWhisperer, thank you very much for your prompt help.
I need a query that extracts TLDs from events and compares the results against a lookup table of blocklisted TLDs.
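(Not a full answer, just a rough sketch of one way to wire that up, assuming a lookup file named blocklisted_tlds.csv with a single column tld that stores values with the leading dot, e.g. ".xyz" — both names are made up. Restricting the capture group to letters also avoids the .33 / .136 style matches from IP addresses mentioned above.)

sourcetype=<sourcetypename>
| rex field=_raw "(?<TLD>\.[a-zA-Z]{2,})(?:$|\/)"
| eval TLD=lower(TLD)
| lookup blocklisted_tlds.csv tld AS TLD OUTPUT tld AS blocked_tld
| where isnotnull(blocked_tld)
| stats count BY TLD

If the lookup stores TLDs without the dot, add | eval TLD=ltrim(TLD, ".") before the lookup.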
Hi
If you have some fields with and without the ".", below is an example of how to get that to work. However, it only works going into an event index; it does not seem to work going into metrics.

[test_abc_transforms]
CLEAN_KEYS = false
DELIMS=,
FIELDS=degraded.threshold,down.threshold

[drop_header]
REGEX = metric_timestamp,metric_name,_value,degraded\.threshold,down\.threshold
DEST_KEY = queue
FORMAT = nullQueue

metric_timestamp,metric_name,_value,degraded.threshold,down.threshold
1695201472,mx.process.cpu.utilization,1.373348018,30,300
1695201472,mx.process.cpu.utilization,1.373348018,30,300
1695201472,mx.process.cpu.utilization,1.373348018,30,300
1695201472,mx.process.cpu.utilization,1.373348018,30,300
1695201472,mx.process.cpu.utilization,1.373348018,30,300
1695201472,mx.process.cpu.utilization,1.373348018,30,300
1695201472,mx.process.cpu.utilization,1.373348018,30,300
1695201472,mx.process.cpu.utilization,1.373348018,30,300
1695201472,mx.process.cpu.utilization,1.373348018,30,300
1695201472,mx.process.cpu.utilization,1.373348018,30,300
| streamstats count as row | eventstats max(row) as total | eval cutoff=total/10 | where row <= cutoff
Hello
To find and fix CSV header errors in multiple files, write a script to check for duplicate column names and invalid fields in the header row, then run the script against your CSV file directory. For Python, a basic example might look like this:

import csv
import os

def check_csv_headers(file_path):
    with open(file_path, 'r') as csvfile:
        csvreader = csv.DictReader(csvfile)
        headers = csvreader.fieldnames
        # Skip files that have no header row at all
        if headers is None:
            print(f"Empty file (no header row): {file_path}")
            return
        if len(headers) != len(set(headers)):
            print(f"Duplicate columns in: {file_path}")
        if '' in headers:
            print(f"Invalid field name in: {file_path}")

# Directory containing CSV files
directory = '/path/to/csv/files'
for filename in os.listdir(directory):
    if filename.endswith('.csv'):
        file_path = os.path.join(directory, filename)
        check_csv_headers(file_path)

Save the script to a file, make it executable (if needed), and run it against the directory containing your CSV files:

python check_csv_headers.py

This approach automates scanning your CSV files for header errors and should help you efficiently locate and fix these issues across multiple files on your Splunk Heavy Forwarder. You can also check the Knowledge Management board: https://community.splunk.com/t5/Knowledge-Management/bd-p/knowledge-management
Thank you
Hello All,
I need your help with using the head command by passing its parameter at run time. The background is as follows: I am working on building an SPL search to identify anomalous events in a time-series dataset. I fetched the average count of all events for each hour and compared it with a moving average to identify the data points and time instances when the average count of events during an hour was significantly greater than the moving average at that point in time. To define "significantly greater", I computed the difference between the average count of events for a day and the moving average up to that day, and determined the percentage difference with respect to the moving average. The observations varied across datasets: for dataset 1, 10% of all events had a percentage difference >= 90%, while for dataset 2, 20% of all events did. Thus, I decided to sort the results in descending order of percentage difference and fetch the first 10% of the total events using the head command. Since the count of events returned varies for each dataset, how can I compute and fetch 10% of events when the dataset is sorted in descending order of percentage difference?
Please let me know if I need to clarify or share any more details to articulate the above query better.
Thank you
Taruchit
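(Not a definitive answer, just a sketch of the usual workaround when you cannot hard-code a count for head: derive the cut-off from the result set itself, much like the streamstats snippet earlier in this feed. pct_diff is only a placeholder for whatever field holds your percentage difference.)

... your search ...
| sort 0 -pct_diff
| streamstats count AS row
| eventstats count AS total
| where row <= ceiling(total * 0.10)

The 0.10 can itself come from an eval or a dashboard token if the percentage needs to vary per dataset.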
Here is one .conf presentation about using TERM and PREFIX: https://conf.splunk.com/files/2021/slides/TRU1133B.pdf There are also a couple of others which you should read to fully understand what TERM actually means and how to use it.
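(A couple of hypothetical searches against _internal, purely to illustrate the syntax — adjust to your own data. TERM() asks the indexer for an exact indexed token, and PREFIX() lets tstats aggregate on whatever follows an indexed key= prefix without a search-time field extraction.)

index=_internal sourcetype=splunkd TERM(group=per_index_thruput)
| stats count BY host

| tstats count avg(PREFIX(kbps=)) AS avg_kbps WHERE index=_internal sourcetype=splunkd BY PREFIX(group=)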
Summary: On a CentOS Stream 9 system, after installing Splunk in /opt/splunk and configuring it to start on boot with systemd, I've noticed unusual behavior. Using manual Splunk commands (/opt/splunk/bin/splunk [start | stop | restart]) alters the Splunkd.service file in /etc/systemd/system/, creating a timestamped backup. This change prevents Splunk from starting via systemctl commands, and consequently on boot, defeating the purpose of the systemd setup. Using chattr to make the service file immutable is my current workaround. This behavior seems specific to CentOS Stream 9.

How to recreate the issue: On a CentOS Stream 9 machine, install Splunk under /opt/splunk and run Splunk as user 'splunk'. After stopping Splunk, enable boot-start with systemd-managed 1. After enabling boot-start, a file will be created at /etc/systemd/system/Splunkd.service. Starting and stopping Splunk using systemctl works fine and as expected. However, if you run sudo /opt/splunk/bin/splunk [start | stop | restart], Splunk itself will change /etc/systemd/system/Splunkd.service and create a backup with a timestamp, e.g. Splunkd.service_2023_09_21_06_49_05. When trying to start with systemctl again, e.g. sudo systemctl start Splunkd:

Failed to start Splunkd.service: Unit Splunkd.service failed to load properly, please adjust/correct and reload service manager: Device or resource busy
See system logs and 'systemctl status Splunkd.service' for details.

This will lead to Splunk not starting after reboot, which is the whole point of enabling systemd. This error message shows up because the Splunkd.service file has been altered. To get systemctl working again, I run sudo systemctl daemon-reload, but as soon as one tries a manual start | stop | restart command, the same issue arises.

When diffing the new service file and the old service file: diff Splunkd.service Splunkd.service_2023_09_21_06_49_05

26c26
< MemoryLimit=3723374592
---
> MemoryLimit=3723378688

MemoryLimit is the only value that is changed for each subsequent 'backup' of the service file; it just switches between these two values.

ChatGPT suggested making the service file immutable with sudo chattr +i /etc/systemd/system/Splunkd.service. After this change, whenever you do a manual start | stop | restart you get a WARNING message, but it won't mangle your service file, and hence Splunk will start after reboot. So it is Splunk itself that is changing the service file. However, this issue was discovered in CentOS Stream 9 and cannot be replicated in earlier versions. Does anybody know what may have caused this weird error?
Hi,
I am trying to learn more about the certificates found within the file /etc/auth/appsCA.pem. I'm referring to Splunk's default certificates: GlobalSign Root CA, GlobalSign ECC, DigiCert Global Root, ISRG Root, IdenTrust Commercial Root. Are they safe? After changing the certificate configuration with my self-signed certificates and merging in the Splunk CA certificates to make Splunkbase work properly (this case here), I wondered whether all of them are necessary for Splunk to work successfully, or only some of them. Is there a documentation page, or can someone explain the use of each of the certificates?
Thanks in advance,
Hi all,
I have migrated a 9.0.4 HF from a Windows Server 2012 to a Windows Server 2022. The original connector was working fine, while the new one (with the same settings) keeps crashing. This is the error I get almost every minute in the Application event viewer:

Faulting application name: splunk-winevtlog.exe, version: 2305.256.25832.56887, time stamp: 0x64e8dfcc
Faulting module name: ntdll.dll, version: 10.0.20348.1970, time stamp: 0x31881ea2
Exception code: 0xc0000374
Fault offset: 0x0000000000104909
Faulting process id: 0x1304
Faulting application start time: 0x01d9ed2bd5be870c
Faulting application path: C:\Program Files\Splunk\bin\splunk-winevtlog.exe
Faulting module path: C:\Windows\SYSTEM32\ntdll.dll
Report Id: 45c2b6fd-2c6e-484d-9602-eb948052101d
Faulting package full name:
Faulting package-relative application ID:

I tried to upgrade the HF to version 9.0.6 and then to version 9.1.1, but the error persists. It seems to be caused by the inputs configured in Splunk_TA_windows (version 8.7.0 installed). These are the enabled inputs that cause the issue:

[WinEventLog://Security]
disabled = 0
start_from = oldest
current_only = 0
evt_resolve_ad_obj = 1
checkpointInterval = 5
blacklist1 = EventCode="4662" Message="Object Type:(?!\s*groupPolicyContainer)"
blacklist2 = EventCode="566" Message="Object Type:(?!\s*groupPolicyContainer)"
blacklist3 = 4656,4658,4690,5031,5140,5150,5151,5154,5155,5156,5157,5158,5159
renderXml = false
index = wineventlog

###### Forwarded WinEventLogs (WEF) ######
[WinEventLog://ForwardedEvents]
disabled = 0
start_from = oldest
current_only = 0
checkpointInterval = 5
## The addon supports only XML format for the collection of WinEventLogs using WEF, hence do not change the below renderXml parameter to false.
renderXml = true
host = WinEventLogForwardHost
index = wineventlog

The only solution I found is to disable the ForwardedEvents input; this way the HF works as expected. I also tried to set current_only=1 on that input, with no luck. Does anyone know if this is a known issue and how to troubleshoot it?
Regards
Alessandro
Hi,
So this error was due to the Admin user having been deleted while the owner was still set to admin. I managed to fix this by changing the owner, but after talking to PS, deleting Admin is a mistake, as so many services rely on it, so it has been changed back.
Hello Yuanliu,
Thanks once again for your efforts. Yes, I did add the quotes; basically I copy-pasted from here directly into the search. Have you tested this at your end, by any chance?
Thanks,
Try to configure it with the SYSTEM user. If the issue persists, check the local logs of the universal forwarder, located in c:\program files\splunk\var\log\splunkd.log
In my case the issue was the user running the Splunk Universal Forwarder service. Open the Services manager and check that; it should be SYSTEM or any user with local admin rights.
Also, when I try to click "open in search" directly from the Lookup Editor, it gives me a prompt with the below error: This lookup doesn't have a transforms.conf entry and needs one in order to be accessible in search.
Recently I created a KV Store lookup using transforms.conf and collections.conf to get data from an API. When the KV lookup is opened manually, we can see the data, but when searching on the search head using inputlookup, it doesn't show anything. Permissions have also been granted. Is there anything else we are missing?
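(In case it helps, here is a minimal sketch of the wiring that inputlookup expects for a KV Store lookup — all names below are made up. The key point is that inputlookup takes the transforms.conf stanza name, not the collections.conf collection name, and fields_list must list every field you want returned.)

# collections.conf
[my_api_collection]
field.host = string
field.status = string

# transforms.conf
[my_api_lookup]
external_type = kvstore
collection = my_api_collection
fields_list = _key, host, status

Then, on the search head:

| inputlookup my_api_lookup

If the transforms stanza (and its permissions) is in place and inputlookup still returns nothing, checking the search job inspector for lookup errors is usually the next step.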
So, you want trellis breakdown on SectionName, not on Attribute. This says even more about the importance of illustrating data input and desired output. Really, all of this should be in text. Anyway, I mocked up three SectionName values based on what you showed in both text and screenshots, and revised https://gist.github.com/whackyhack/af1d0b20e297a5237594cbb6aaacc0f6 to mock trellis by SectionName.

Needless to say: you need to know all possible values of SectionName in advance. (And if you want the tables to be in alphabetic order, you need to arrange all of them in that order manually.) The token setting is based on a search where only two SectionName values are returned, namely HTTP Plugin settings and Transport chain: WCInboundDefaultSecure:Channel HTTP. Therefore the table for Process Definition is hidden. The core search to set tokens by SectionName should simply be

index = websphere_cct (Object= "HJn5server1" Env="Prod") OR (Object = "HJn7server3" Env="UAT")
    [ search index=websphere_cct SectionName=* | dedup Order | table Order ]
| stats values(SectionName) as SectionName

Also note that a token name cannot contain a colon (:); for this reason, I used $Transport chain- WCInboundDefaultSecure-Channel HTTP$ as the token name. Again, this mock trellis requires you to know all possible values of SectionName in advance and to manually program every <table/>.

For you to more easily compare to real data, here is the data emulation code

| makeresults
| eval data = mvappend("{\"ObjectType \":\"AppServer\",\"Object\":\"HJn7server3\",\"Order\":\"147\",\"Env\":\"UAT\",\"SectionName\":\"Transport chain: WCInboundDefaultSecure:Channel HTTP\", \"Attributes\":{\"discriminationWeight\": \"10\",\"enableLogging\": \"FALSE\",\"keepAlive\": \"TRUE\",\"maxFieldSize\": \"32768\",\"maxHeaders\": \"500\",\"maxRequestMessageBodySize\": \"-1\",\"maximumPersistentRequests\": \"100\",\"name\": \"HTTP_4\",\"persistentTimeout\": \"30\",\"readTimeout\": \"60\",\"useChannelAccessLoggingSettings\": \"FALSE\",\"useChannelErrorLoggingSettings\": \"FALSE\",\"useChannelFRCALoggingSettings\": \"FALSE\",\"writeTimeout\": \"60\"}}",
"{\"ObjectType \":\"AppServer\",\"Object\":\"HJn5server1\",\"Order\":\"147\",\"Env\":\"UAT\",\"SectionName\":\"Transport chain: WCInboundDefaultSecure:Channel HTTP\", \"Attributes\":{\"discriminationWeight\": \"10\",\"enableLogging\": \"FALSE\",\"keepAlive\": \"TRUE\",\"maxFieldSize\": \"32768\",\"maxHeaders\": \"500\",\"maxRequestMessageBodySize\": \"-1\",\"maximumPersistentRequests\": \"100\",\"name\": \"HTTP_4\",\"persistentTimeout\": \"30\",\"readTimeout\": \"60\",\"useChannelAccessLoggingSettings\": \"FALSE\",\"useChannelErrorLoggingSettings\": \"FALSE\",\"useChannelFRCALoggingSettings\": \"FALSE\",\"writeTimeout\": \"60\"}}",
"{\"ObjectType \":\"AppServer\",\"Object\":\"HJn7server3\",\"Order\":\"147\",\"Env\":\"PROD\",\"SectionName\":\"Process Definition\", \"Attributes\":{\"IBM_HEAPDUMP_OUTOFMEMORY\": \"\",\"executableArguments\": \"[]\",\"executableTarget\": \"com.ibm.ws.runtime.WsServer\",\"executableTargetKind\": \"JAVA_CLASS\",\"startCommandArgs\": \"[]\",\"stopCommandArgs\": \"[]\",\"terminateCommandArgs\": \"[]\",\"workingDirectory\": \"${USER_INSTALL_ROOT}\"}}",
"{\"ObjectType \":\"AppServer\",\"Object\":\"HJn5server1\",\"Order\":\"147\",\"Env\":\"UAT\",\"SectionName\":\"Process Definition\", \"Attributes\":{\"IBM_HEAPDUMP_OUTOFMEMORY\": \"\",\"executableArguments\": \"[]\",\"executableTarget\": \"com.ibm.ws.runtime.WsServer\",\"executableTargetKind\": \"JAVA_CLASS\",\"startCommandArgs\": \"[]\",\"stopCommandArgs\": \"[]\",\"terminateCommandArgs\": \"[]\",\"workingDirectory\": \"${USER_INSTALL_ROOT}\"}}",
"{\"ObjectType \":\"AppServer\",\"Object\":\"HJn7server3\",\"Order\":\"147\",\"Env\":\"PROD\",\"SectionName\":\"HTTP Plugin settings\", \"Attributes\":{\"ConnectTimeout\": 5,\"MaxConnections\": -1,\"Role\": \"PRIMARY\",\"ExtendedHandshake\": false,\"ServerIOTimeout\": 900,\"waitForContinue\": false}}",
"{\"ObjectType \":\"AppServer\",\"Object\":\"HJn5server1\",\"Order\":\"147\",\"Env\":\"PROD\",\"SectionName\":\"HTTP Plugin settings\", \"Attributes\":{\"ConnectTimeout\": 5,\"MaxConnections\": -1,\"Role\": \"PRIMARY\",\"ExtendedHandshake\": false,\"ServerIOTimeout\": 900,\"waitForContinue\": false}}")
| mvexpand data
| rename data as _raw
``` above emulates index = websphere_cct (Object= "HJn5server1" Env="Prod") OR (Object = "HJn7server3" Env="UAT") [ search index=websphere_cct SectionName=* | dedup Order | table Order ] ```

or _raw

{"ObjectType ":"AppServer","Object":"HJn7server3","Order":"147","Env":"UAT","SectionName":"Transport chain: WCInboundDefaultSecure:Channel HTTP", "Attributes":{"discriminationWeight": "10","enableLogging": "FALSE","keepAlive": "TRUE","maxFieldSize": "32768","maxHeaders": "500","maxRequestMessageBodySize": "-1","maximumPersistentRequests": "100","name": "HTTP_4","persistentTimeout": "30","readTimeout": "60","useChannelAccessLoggingSettings": "FALSE","useChannelErrorLoggingSettings": "FALSE","useChannelFRCALoggingSettings": "FALSE","writeTimeout": "60"}}
{"ObjectType ":"AppServer","Object":"HJn5server1","Order":"147","Env":"UAT","SectionName":"Transport chain: WCInboundDefaultSecure:Channel HTTP", "Attributes":{"discriminationWeight": "10","enableLogging": "FALSE","keepAlive": "TRUE","maxFieldSize": "32768","maxHeaders": "500","maxRequestMessageBodySize": "-1","maximumPersistentRequests": "100","name": "HTTP_4","persistentTimeout": "30","readTimeout": "60","useChannelAccessLoggingSettings": "FALSE","useChannelErrorLoggingSettings": "FALSE","useChannelFRCALoggingSettings": "FALSE","writeTimeout": "60"}}
{"ObjectType ":"AppServer","Object":"HJn7server3","Order":"147","Env":"PROD","SectionName":"Process Definition", "Attributes":{"IBM_HEAPDUMP_OUTOFMEMORY": "","executableArguments": "[]","executableTarget": "com.ibm.ws.runtime.WsServer","executableTargetKind": "JAVA_CLASS","startCommandArgs": "[]","stopCommandArgs": "[]","terminateCommandArgs": "[]","workingDirectory": "${USER_INSTALL_ROOT}"}}
{"ObjectType ":"AppServer","Object":"HJn5server1","Order":"147","Env":"UAT","SectionName":"Process Definition", "Attributes":{"IBM_HEAPDUMP_OUTOFMEMORY": "","executableArguments": "[]","executableTarget": "com.ibm.ws.runtime.WsServer","executableTargetKind": "JAVA_CLASS","startCommandArgs": "[]","stopCommandArgs": "[]","terminateCommandArgs": "[]","workingDirectory": "${USER_INSTALL_ROOT}"}}
{"ObjectType ":"AppServer","Object":"HJn7server3","Order":"147","Env":"PROD","SectionName":"HTTP Plugin settings", "Attributes":{"ConnectTimeout": 5,"MaxConnections": -1,"Role": "PRIMARY","ExtendedHandshake": false,"ServerIOTimeout": 900,"waitForContinue": false}}
{"ObjectType ":"AppServer","Object":"HJn5server1","Order":"147","Env":"PROD","SectionName":"HTTP Plugin settings", "Attributes":{"ConnectTimeout": 5,"MaxConnections": -1,"Role": "PRIMARY","ExtendedHandshake": false,"ServerIOTimeout": 900,"waitForContinue": false}}
Hi @Madhusmita, you can attach only one csv to an email, not more, so the only way (when possible) is to append the results of several searches, but readability could be very low if they are different. I suggest creating more alerts, one for each csv. Ciao. Giuseppe
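(If a single csv is acceptable, here is a rough sketch of gluing two searches together before the alert writes its attachment — the searches themselves are just placeholders:)

index=web sourcetype=access_json | stats count AS events BY status
| append [ search index=web sourcetype=error_json | stats count AS events BY error_code ]

As Giuseppe says, though, separate alerts, one per csv, are usually more readable.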
I want to automate the uploading of a lookup file, but first I have to upload it to a staging area. Is the staging area a public area, or is it something only an admin can access in an organization? Is there any API, or can anyone help, please, so that I can automate this lookup upload directly from a user-designated directory? I tried, but it gives me this output: ERROR:root:[failed] file: 'prices.csv', status:503, reason:Service Unavailable, url:http://localhost:8000/services/data/lookup-table-files/