All Topics

I want to know where we should install the SolarWinds Add-on for Splunk: on the deployment server, the search head, or the indexers? I would also like to know whether this add-on can collect logs when they are sent from a syslog server.
I have the below JSON in my events:

Recalibration Stats json : {"modelid" : "30013", "champion_gini" : 0.8274502273019728, "recalibResult" : "CASE I Champion Retained", "challenger_gini" : 0.8013221831674033, "recalibDate" : "2020-05-01"}

Right now, to get the JSON fields I have to name them explicitly with table/fields. My JSON can contain different fields depending on the source, so I want to return only the fields parsed from the JSON without naming them explicitly. The query below also gives me all the unwanted fields:

index=abx sourcetype=gmdevops_rome source="/axp/gnics/orchestra/dev/romedata/logs/model_run_qc.log" "Recalibration Stats json"
| rex field=_raw "Recalibration Stats json : (?<recalib_stats>.+)"
| spath input=recalib_stats
| table *
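A hedged sketch of one possible fix (assuming the unwanted columns are Splunk's default and internal fields rather than extra spath output): spath already extracts only the JSON keys, so explicitly dropping the default fields before `table *` leaves just the parsed fields. The exact drop list below is illustrative and may need adjusting for your environment:

```spl
index=abx sourcetype=gmdevops_rome source="/axp/gnics/orchestra/dev/romedata/logs/model_run_qc.log" "Recalibration Stats json"
| rex field=_raw "Recalibration Stats json : (?<recalib_stats>.+)"
| spath input=recalib_stats
| fields - _raw _time recalib_stats punct linecount
| fields - index source sourcetype host splunk_server
| table *
```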
Hi all, I'm new to Splunk searches and would appreciate some help to find out how to pull out the file path, file name, and file extension from the message field (example below). The message has verbose text and the path occurs twice within it. In this example I'd be looking to extract the file path, file name, and file extension from within the text and present them in a four-column table along with the time of the event. Thanks in advance!

Message=Code Integrity determined that a process (\Device\HarddiskVolume1\Program Files\SplunkUniversalForwarder\bin\splunkd.exe) attempted to load \Device\HarddiskVolume1\Program Files\SplunkUniversalForwarder\bin\splunk-netmon.exe that did not meet the Enterprise signing level requirements or violated code integrity policy (Policy ID:{a244370e-44c9-4c06-b551-f6016e563076}). However, due to code integrity auditing policy, the image was allowed to load.
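A hedged sketch of one rex-based approach (it targets the first parenthesized path; change the anchor text before the opening parenthesis to grab the second occurrence instead). Note that backslashes in SPL rex typically need double escaping (\\\\):

```spl
| rex field=Message "process \((?<full_path>[^)]+)\)"
| rex field=full_path "(?<file_path>.+\\\\)(?<file_name>[^\\\\]+)$"
| rex field=file_name "\.(?<file_ext>[^.]+)$"
| table _time file_path file_name file_ext
```

The first rex captures the whole parenthesized path, the second splits it at the last backslash, and the third takes everything after the final dot as the extension.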
I want to remove the leading and trailing spaces from my field. I am trying to use trim and the rex below, but neither is working for me:

| eval NewField=trim(OldField)
| rex field=myField mode=sed "s/(^\s+)|(\s+$)//g"

My data, for example (the dots show the spaces):
TR#
CR1K901395........
BT1K901394
CT2K901398
KMK901397.........
NHK901393

| eval TRIM=trim(TR#) throws an error. Can't we use the trim function on the whole field? Any suggestions on how I can remove the spaces from the whole field?
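For what it's worth, one likely cause (hedged, since the error text isn't shown): field names containing special characters such as # must be wrapped in single quotes when read inside eval, and in double quotes when assigned:

```spl
| eval "TR#"=trim('TR#')
```

The single quotes on the right-hand side tell eval that 'TR#' is a field reference rather than a string literal; trim() itself already strips both leading and trailing whitespace.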
1. How can I extract the timestamp to the correct time, as follows?
2020/12/29 下午 02:39:45 ("下午" means PM) ==> 2020/12/29 14:39:45
2020/12/29 上午 05:15:08 ("上午" means AM) ==> 2020/12/29 05:15:08
2. If Splunk can't recognize the Chinese characters and I manually change "下午" to PM and "上午" to AM, can I extract the timestamp as follows? I use "%Y/%m/%d %p %I:%M:%S" to extract the time, but it fails.
2020/12/29 PM 02:39:45 ==> 2020/12/29 14:39:45
2020/12/29 AM 05:15:08 ==> 2020/12/29 05:15:08
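One hedged search-time workaround (the field name raw_time is hypothetical): normalize the Chinese AM/PM markers with replace() and then parse with strptime():

```spl
| eval ts_norm=replace(replace(raw_time, "上午", "AM"), "下午", "PM")
| eval epoch=strptime(ts_norm, "%Y/%m/%d %p %I:%M:%S")
| eval readable=strftime(epoch, "%Y/%m/%d %H:%M:%S")
```

If %p placed before the hour is not honored on your platform, reorder ts_norm so AM/PM follows the seconds and adjust the format string to match. For index-time extraction, a SEDCMD in props.conf can perform the same 上午/下午 replacement before TIME_FORMAT is applied.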
Currently, I already filter the Windows event logs to only Windows Security logs. However, Windows logs take up the majority of our Splunk license usage, and we are working to reduce Windows log ingestion by implementing best practices for Windows monitoring for security purposes. Does anyone have a best practice for reducing Windows log volume based on event code? Can anyone provide a list of Windows event codes that should be ingested, and codes that do not need to be ingested, for security purposes? Your help is very much appreciated. Thanks in advance, Fatihah
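As a hedged illustration of the usual mechanism (the specific codes below are examples only, not a vetted security baseline): the Splunk Add-on for Windows supports blacklist entries in inputs.conf that drop events by EventCode regex on the forwarder, before they count against the license:

```ini
[WinEventLog://Security]
disabled = 0
# Examples only: codes often cited as high-volume/low-value; verify against your own policy
blacklist1 = EventCode="4662"
blacklist2 = EventCode="5156|5158"
```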
Currently, hosts in our index=windows have not been reporting for the last couple of days. I need a query to set up an alert for when log sources stop reporting to Splunk.
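A hedged sketch of one common pattern: use tstats to find the last event time per host and alert when the gap exceeds a threshold:

```spl
| tstats latest(_time) as last_seen where index=windows by host
| eval hours_since=round((now()-last_seen)/3600, 1)
| where hours_since > 24
```

Note that tstats only returns hosts with at least one event inside the search window, so pick a range wider than the alert threshold (e.g., last 7 days); schedule the search and trigger the alert when the result count is greater than zero.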
Hello, I want to send a search result to my email in an Excel-readable form for reporting. The problem is that our company's Splunk is located on a jump server which does not have access to the Internet, so an alert can only pass through an internal email server, which then sends the alert email. Can I send a one-time search result to my email as a one-time alert, and how do I achieve that in the settings?
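One hedged option for a one-time send (addresses and search below are placeholders): the sendemail command mails the results of the current search directly, using whatever mail relay Splunk is configured with:

```spl
index=main sourcetype=my_data
| stats count by host
| sendemail to="me@example.com" subject="One-time report" sendresults=true sendcsv=true
```

The CSV attachment opens in Excel. The relay itself is set under Settings > Server settings > Email settings, which is where your internal email server would go.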
Hi All, the Speedtest app (https://splunkbase.splunk.com/app/3530/) is a great little app that has worked perfectly for the last few years, until my ISP upgraded the network last week. Now my upload metrics are off, but only when Splunk runs the script. If I run it manually it works perfectly. It's super odd. Whenever Splunk runs it as a scheduled scripted input, the upload value returns ~3.5Mbps. When I manually run the script from the same server, it returns ~21Mbps:

{"client": {"rating": "0", "loggedin": "0", "isprating": "3.7", "ispdlavg": "0", "ip": "193.116.81.55", "isp": "TPG Internet", "lon": "153.0215", "ispulavg": "0", "country": "AU", "lat": "-27.4732"}, "bytes_sent": 27787264, "download": 251570694.55484477, "timestamp": "2021-02-07T23:46:05.028139Z", "share": null, "bytes_received": 315446696, "ping": 16.562, "upload": 21321234.56928962, "server": {"latency": 16.562, "name": "Brisbane", "url": "http://brs1.speedtest.telstra.net:8080/speedtest/upload.php", "country": "Australia", "lon": "153.0278", "cc": "AU", "host": "brs1.speedtest.telstra.net:8080", "sponsor": "Telstra", "lat": "-27.4728", "id": "2604", "d": 0.62311775977947}}

Running it from my desktop also returns ~21Mbps: https://www.speedtest.net/result/10890590348. I've tried playing around with the schedule, but it doesn't seem to help. Any ideas? This was working just fine until my ISP upgraded the network last week. Previously I had 50Mbps down and 23Mbps up, and the scripted input accurately reflected the measurements. Pinging the author @markhill1 in case he has any ideas.
I am collecting logs every 5 seconds using a script. However, script execution suddenly stops. Why does the script stop? Is it a Splunk bug? My version is 7.3.3.
Good afternoon everyone, I am an ISSO who just inherited a Splunk environment. I have been leaning heavily on this community and have received lots of great feedback regarding different documents. My latest problem is that my dispatch directory is nearing capacity, with only 3 of 5 GB left, so I can't run any new searches or load dashboards; everything is at a standstill.

I am aware I can use a command to clear artifacts from the dispatch directory, and that there are ways to allocate more space or redirect the dispatch directory, but what I am truly worried about is whether I will lose information by clearing the dispatch directory of artifacts. I am concerned about losing security-related data or auditable events. Can anyone break down what exactly a search artifact in Splunk contains, and whether it is something I need to keep on hand for security purposes down the road? If I can show my colleagues what a search artifact is, and why we do or don't need to worry about deleting them, then I can proceed. My organization isn't telling me I need to keep the artifacts, but that doesn't mean I shouldn't err on the side of caution. Any help is greatly appreciated.

More info: all we care about is auditing the devices connected to Splunk by way of queries and dashboards. As long as that data is not compromised, we are good.
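For reference, a hedged sketch (verify the exact syntax against your Splunk version's CLI help): the clean-dispatch command moves artifacts older than a cutoff into a destination directory rather than deleting them outright, so they can be inspected before removal:

```shell
# Move dispatch artifacts last modified more than 7 days ago into a holding directory
$SPLUNK_HOME/bin/splunk cmd splunkd clean-dispatch /tmp/old-dispatch -7d@d
```

Artifacts are cached search results and job metadata, not indexed events, so the underlying audit data in the indexes is not touched by this operation.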
Hello! I have a training with Splunk Phantom starting tomorrow morning and my approval is still pending. I need the OVA to download Phantom and keep up with the training. Could someone from @sam_splunk please assist? Similar issue: https://community.splunk.com/t5/Splunk-Phantom/Splunk-Phantom-Community-Edition-Registration-approval-pending/m-p/498442
OK, not sure if this is the right section. I have been using Zeek for Splunk and TA_suricata, and we are getting a lot of IPs, of course. I have built out some IPs and CIDR ranges in a CSV. What is the best way to add it into the app, or should it be a separate lookup that could be used anywhere? I'm not sure if there is a difference between an IP lookup and a CIDR lookup. I was also thinking of merging the apps into one app, but that might be a question for another day. Thanks
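A hedged sketch of a CIDR-capable lookup (stanza, file, and field names below are hypothetical): a plain CSV lookup matches IPs exactly, while setting match_type = CIDR in transforms.conf lets one row match a whole range, which is the main practical difference between the two:

```ini
# transforms.conf
[ip_watchlist]
filename = ip_watchlist.csv
match_type = CIDR(cidr_range)
max_matches = 1
```

At search time this would be used as `| lookup ip_watchlist cidr_range AS src_ip OUTPUT label`. Placing it in a small shared app (rather than inside either TA) lets both the Zeek and Suricata searches use it.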
Dear Splunk community, I have a Python application that pushes data to Splunk every time it is executed. Multiple events are pushed in JSON format. Only a subset of the data being sent, namely two fields, changes during job execution; the rest are constant per job execution (think of them as some sort of job metadata). I would like to have that metadata in Splunk so I can filter on it, but I don't like pushing lots of identical data with each event. I guess what I am looking for is some sort of bulk tagging after each import, where each job metadata field would be a label. I appreciate any thoughts/suggestions on how to do this using Splunk best-known methods.
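One hedged pattern (all field and sourcetype names below are hypothetical): send the constant metadata once per job keyed by a job_id, keep only job_id on the per-event payloads, and rejoin at search time via a lookup. A scheduled search maintains the lookup from the metadata events, and the second search enriches events for filtering:

```spl
index=app sourcetype=job_meta
| dedup job_id
| table job_id owner environment model_version
| outputlookup job_metadata.csv

index=app sourcetype=job_events
| lookup job_metadata.csv job_id OUTPUT owner environment model_version
```

This keeps ingestion small (one metadata event per job) while still letting every event be filtered by any metadata field.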
We have a game with a login log. I want to analyze the people that log in today and don't log in tomorrow, i.e., to analyze what affects 1-day retention. But I can't find these lapsed players. I think maybe I can use NOT or an inner join, but I have failed so far.
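A hedged sketch (index, sourcetype, and field names assumed): list users who logged in on day N but not on day N+1 by excluding the later day's users with a NOT subsearch:

```spl
index=game sourcetype=login earliest=-2d@d latest=-1d@d
| stats count by user
| search NOT
    [ search index=game sourcetype=login earliest=-1d@d latest=@d
      | stats count by user
      | fields user ]
| table user
```

Subsearches are capped (commonly at 10,000 results), so for a large player base a single stats over both days, bucketing _time by day and comparing values(day) per user, scales better than the subsearch form.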
Hi, I am trying out the Unix and Linux add-on on my Linux machines. I enabled the iostat_metric monitor and see high I/O values, often about 100 to 2000 IOPS. But that's wrong: it's a test machine, vCenter monitoring shows only 1-5 IOPS, and after enabling collectd monitoring, collectd also shows low IOPS. If I run the iostat_metric.sh script manually, it also sometimes shows those high IOPS. Running iostat manually, the IOPS are low and correct. Is there a known bug in parsing the output from iostat? Does anyone else see the same behavior?
Using the Splunk Python SDK API, is there a way to find a list of all Splunk alerts which have not been triggered for a specified period of time? The basic idea is to run such a report on a regular basis and notify users about alerts that have just been sitting there for that period, so that those alerts can be modified, etc. Thanks
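One hedged approach the SDK can drive (for example by submitting the SPL below as a oneshot job through splunklib): scheduler logs in _internal record when a saved search's alert actions actually fired, so the latest firing per saved search shows which alerts have been silent. The alert_actions filter and the 30-day cutoff are assumptions to adjust, and _internal retention limits how far back this can see:

```spl
index=_internal sourcetype=scheduler status=success alert_actions=*
| stats latest(_time) as last_fired by savedsearch_name
| where last_fired < relative_time(now(), "-30d")
| convert ctime(last_fired)
```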
Hello, I wanted to request some assistance with combining different searches from the same index and sourcetype, but different sources, into a table or report. I struggle with the concept of combining them. I have researched joins, stats, charts, etc., but when I try to implement them I get errors, which makes me unsure how to combine them effectively to get the results I need. Any guidance or information that helps me learn this properly would be very helpful. I have the following separate searches that give me the results I need:

Storage:
index="SRV" sourcetype=WinHostMon source=disk DriveType=fixed TotalSpaceKB="*"
| eval TotalSpaceKB = round(TotalSpaceKB/100000000)
| stats sum(TotalSpaceKB) as "TotalSpace (GB)" by host

OS:
index="SRV" sourcetype=WinHostMon source=operatingsystem os="*"
| dedup host
| table host os

CPU:
index="SRV" sourcetype=WinHostMon source=processor NumberOfProcessors="*"
| dedup host
| table host NumberOfProcessors

Memory:
index="SRV" sourcetype=WinHostMon source=operatingsystem TotalPhysicalMemoryKB="*"
| dedup host
| eval "TotalPhysicalMemory (GB)" = round(((TotalPhysicalMemoryKB)/1000000),1)
| table host "TotalPhysicalMemory (GB)"

My end goal is a single table or report with the following columns: Host, OS, Number of Processors, Total Physical Memory, Total Storage. Thank you, Dan
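A hedged sketch of one way to merge these with a single stats by host (it reuses the field names from the searches above; the KB-to-GB divisor is illustrative, and you may need a dedup or a narrow time range so each disk is only counted once):

```spl
index="SRV" sourcetype=WinHostMon (source=disk DriveType=fixed) OR source=operatingsystem OR source=processor
| eval SpaceGB=if(source="disk", TotalSpaceKB/1000000, null())
| stats sum(SpaceGB) as TotalStorageGB
        latest(os) as OS
        latest(NumberOfProcessors) as NumberOfProcessors
        latest(TotalPhysicalMemoryKB) as MemKB
        by host
| eval "TotalPhysicalMemory (GB)"=round(MemKB/1000000, 1)
| eval TotalStorageGB=round(TotalStorageGB, 1)
| table host OS NumberOfProcessors "TotalPhysicalMemory (GB)" TotalStorageGB
```

Running stats over one combined base search avoids join entirely: each aggregate simply ignores events that lack its field.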
Hey all, I have a file that contains the following values: #9, #10, #4, #1, .., #6. For everything that is not #9 or #10, I have already made a replacement so it shows #other for #4, #6, etc. But when the statistics are shown, I see the order #10, #9, #other, and I want the output in the order #9, #10, #other. The search string I am using here does not produce the desired output:

index=app_events_dbdetect_actimize_event_us_uat sourcetype=txndata Return_code_sent_to_SIL="#*"
| eval Return_code_sent_to_SIL=if(Return_code_sent_to_SIL="#9" OR Return_code_sent_to_SIL="#10", Return_code_sent_to_SIL, "#other")
| top limit=0 Return_code_sent_to_SIL
| inputlookup append=true lookup_0_error_totals.csv
| stats max(count) as "Total errors" by Return_code_sent_to_SIL
| rename Return_code_sent_to_SIL as "#error"
| eval sort_Return_code_sent_to_SIL=case("#error"="#9",1, "#error"="#10",2, "#error"="#other",4)
| sort by sort_Return_code_sent_to_SIL

What am I doing wrong? Thanks!
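For what it's worth, a hedged guess at the issue: inside case(), "#error" in double quotes is a string literal, so every comparison is false. Field names with special characters need single quotes in eval, and sorting before the rename sidesteps the quoting entirely. A sketch of the tail of the search:

```spl
| stats max(count) as "Total errors" by Return_code_sent_to_SIL
| eval sort_key=case(Return_code_sent_to_SIL="#9", 1, Return_code_sent_to_SIL="#10", 2, true(), 3)
| sort 0 sort_key
| fields - sort_key
| rename Return_code_sent_to_SIL as "#error"
```

Note also that the usual form is `sort 0 sort_key` (0 meaning no row limit) rather than `sort by sort_key`.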
How do I know whether Splunk can be installed on my server's OS? The server is a Dell PowerEdge R740. OS version: Red Hat Enterprise Linux Server 6.10 (Santiago), kernel 2.6.32-754.18.2.