I have put together a Python Splunk modular input that depends on Python 3 to execute. It works just fine when Python 3 is the default server-level Python version (via server.conf: python.version = python3). I'm distributing this to various Splunk users, and not all of them have that enabled yet. When I try to install it on one of those instances, introspection fails with syntax errors (Python 3-specific syntax) and I cannot get the modular input to initialize. Is there a way to tell introspection which Python to use when initializing the modular input (within the script itself)? I have tried adding the above setting to default/inputs.conf with no luck. Thanks
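On Splunk 8.0+, python.version can reportedly be set per input stanza rather than only globally in server.conf. A sketch, assuming the modular input's scheme name is "myinput" (a placeholder for your actual scheme):

```
# default/inputs.conf inside the app -- per-stanza Python version,
# so the global server.conf default does not need to change
[myinput]
python.version = python3
```

Whether introspection honors this on every target version is worth verifying against the versions your users actually run; instances older than 8.0 may not support the setting at all.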
Hello everyone, I have the attached file, which is generated every night by my client's internal system, and I need to index it to collect metrics. The files must be indexed based on the date in the file name, e.g. qtd_ramal_diario_04042020.txt, qtd_ramal_diario_05042020.txt, etc., so the event timestamp should come from the file name. I also need to extract the values separated by ";" into separate fields named Field1, Field2, Field3, respectively. Note that the file is variable: some days generate many lines, others do not. Expected result:

FIELD1    FIELD2  FIELD3
77111010  8       614
77111812  1       106
77115070  1       58
70666287  4       171
70662245  12      708
77196074  23      1439

Is there a way to do this with Splunk? Below is an example of the generated log:

78122960;2; 132
55002801;3; 279
8068256;8; 466
80661008;4; 134
55258888; 21;1843
76283160;1;25
55735555; 15;1027
55191240;1; 267
80662176;2; 249
790965034;3;93
55159608;1;20
80668021;1;19
76282680;2; 154
80664441;5; 536
71172794;1;28
55196157; 16;1208
55192425;3; 347
55196091;1;23
55192404;1;71
55196032; 24; 996
55196553;2;78
55196040;4;1087
55196426;1; 152
78111816;2; 157
78111847;1;30
78111815;6; 429
78111814;3; 233
55021902;2; 278
55034140;4; 159
550364331;1;80
550561127;2;78
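A sketch of one possible setup, assuming a sourcetype name of "qtd_ramal" (a placeholder). The semicolon fields can be split at search time with a delimiter-based transform; the timestamp-from-filename part needs a custom datetime.xml, since the events themselves carry no date:

```
# props.conf -- sketch only
[qtd_ramal]
SHOULD_LINEMERGE = false
# point to a custom datetime.xml that parses ddmmyyyy out of the source
# file name; the path below is an assumption, adjust to your app
DATETIME_CONFIG = /etc/apps/myapp/default/datetime.xml
REPORT-qtd_fields = qtd_ramal_fields

# transforms.conf -- sketch only
[qtd_ramal_fields]
DELIMS = ";"
FIELDS = Field1,Field2,Field3
```

Writing the datetime.xml itself (with `<text prefix>` matching the file name pattern) is the fiddly part; an alternative is to ingest with the default timestamp and derive a date field from the source at search time.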
Hi Splunkers, how do I calculate the logging frequency of a given index=xxx sourcetype=yyy host=zzz? Explanation: I have different sets of logs that arrive at different frequencies; some send every minute or hour, and some send logs once a day, so the logging frequency is not fixed, it's dynamic. I'm trying to find a way to alert when a particular index=xxx sourcetype=yyy host=zzz stops sending logs. I want a dynamic way of calculating the frequency threshold, so that I can alert on (now() - last_event_time) > threshold; I don't want to simply compare the last event time against the current time with some arbitrary fixed threshold. I want Splunk to tell me the ideal threshold for each index, sourcetype and host combination. For example: logs from index=a host=b sourcetype=c arrive once a day, so here I want Splunk to suggest an ideal threshold (maybe ~1 day or ~24 hours) for the alert. Another example: logs from index=g sourcetype=h host=i arrive every 4.5-5 hours, so here I want Splunk to suggest roughly 4.5h or 5h as the threshold. Using this I can set an alert like (now() - last_event_time) > threshold. Thanks in advance. Happy Splunking.
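One way to learn the typical inter-arrival gap per combination and alert when the current silence exceeds it — a sketch, where the 2x multiplier is an arbitrary safety factor you would tune:

```
index=*
| streamstats current=f window=1 last(_time) as prev_time by index, sourcetype, host
| eval gap = prev_time - _time
| stats avg(gap) as avg_gap, max(_time) as last_seen by index, sourcetype, host
| eval threshold = avg_gap * 2
| where (now() - last_seen) > threshold
```

Since search results arrive newest-first, prev_time here is the next-newer event, so gap comes out positive. Over a large estate this scales better as a scheduled search that writes avg_gap to a lookup, with the alert doing a cheap `| tstats max(_time)` against that lookup.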
Hi, as the title suggests, I am looking for some SPL that will show, per index, the disk used by warm, cold, etc. buckets. Your input is greatly appreciated.
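A sketch using dbinspect, which reports per-bucket state (hot/warm/cold/frozen) and size; note it only covers indexes visible from the search head you run it on:

```
| dbinspect index=*
| stats sum(sizeOnDiskMB) as size_mb by index, state
| eval size_gb = round(size_mb / 1024, 2)
| sort index, state
```

In a clustered environment you may want to run this per indexer, or compare against `| rest /services/data/indexes` for the configured size limits.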
I'm trying to output a file in CSV format, honoring maxresultrows, using curl. This works the way I want:

$(curl -s -u user:pass -k https://localhost:8089/services/search/jobs/$id/results/?count=0 --get -d output_mode=csv > ${file})

However, now I want a larger number of rows, so I add maxresultrows=500000. But then it outputs XML instead of CSV, even with output_mode=csv:

$(curl -s -u user:pass -k https://localhost:8089/services/search/jobs/$id/results/?count=0 --get -d maxresultrows=500000 output_mode=csv > ${file})

What am I missing here? EDIT: One thing I found is that without "maxresultrows" and "output_mode=csv" it outputs all result rows, but in XML. It seems the bottleneck is the output_mode; I can't find why it just doesn't output all the result rows to a CSV file. Please help!
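A likely culprit: with curl, each form parameter needs its own -d flag. In the second command, `output_mode=csv` is a separate curl argument (curl treats it as another URL), not a query parameter, so the endpoint falls back to XML. A sketch of the corrected call, with the same placeholder credentials and job id as above:

```
curl -s -u user:pass -k --get \
  -d count=0 \
  -d output_mode=csv \
  "https://localhost:8089/services/search/jobs/$id/results/" > ${file}
```

Also worth checking: the row cap for this endpoint is governed server-side by maxresultrows under [restapi] in limits.conf; whether maxresultrows is accepted as a query parameter on the results endpoint is not certain, so count plus the server-side limit may be what actually controls the output size.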
Does anyone have any recommendations on how to tune the logs for the App for Palo Alto? I found and applied the following Splunk Answers article: https://answers.splunk.com/answers/740273/how-to-filter-out-informational-logs-from-palo-alt.html But with this change, a few dashboards just don't display events. I'm thinking that maybe I need to apply some filtering to the individual sourcetypes instead of the global pan:log.
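A sketch of per-sourcetype filtering, so only traffic events are dropped and the threat/system events the dashboards rely on still get indexed. The sourcetype name and REGEX are assumptions; match them against your TA's actual sourcetypes and log format before deploying:

```
# props.conf -- sketch; scoped to the traffic sourcetype, not pan:log
[pan:traffic]
TRANSFORMS-null = pan_traffic_drop_informational

# transforms.conf -- sketch; REGEX is a placeholder pattern
[pan_traffic_drop_informational]
REGEX = ,informational,
DEST_KEY = queue
FORMAT = nullQueue
```

If dashboards still go empty, it usually means the dropped severity was feeding a data model or summary the dashboard queries, so check which sourcetypes the broken panels actually search.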
Hello, can someone explain to me the difference between a summary index and cached search results? Thanks
Hello and Good day Splunkers. I have a bit of a vague question. Does anyone have any information or knowledge to share on Scripted Alerts in Splunk Cloud? Trying to go through the documentation to find out what alert actions are possible and I'm having difficulty making sense of it. The reason I'm asking is another team has approached us to build a scripted alert for them. So when an alert is triggered it's not just sending an email but splunk also runs a script to download some files through a provided .sh script which appears to be a thread dump web service. The reason I'm asking so vaguely is to try and gauge what Alert Actions are possible and not possible in Splunk cloud. Any idea's are appreciated. Thank you for reading!
My goal is to put an iFrame widget in a dashboard. I've tried this once before with the same code (afaik), and it worked. That was about a month ago, but I did not save that dashboard... The Edit --> Source mode does not give any errors, and I can press SAVE. However, when I view the dashboard afterwards, the area where Wikipedia is supposed to appear is empty; the iFrame widget does not seem to load. When I put in there, it does work. The code I used is written below. I put Wikipedia in there as an example, since that website allows itself to be embedded. What am I forgetting? How do I make this work?

<row>
  <panel>
    <title>TEST - iFrame in Dashboard</title>
    <html>
      <iframe src="https://www.wikipedia.org/" height="200px" width="100px"/>
    </html>
  </panel>
</row>
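One thing worth trying as a sketch: in HTML, iframe is not a void element, so a self-closing `<iframe .../>` can be rewritten by the browser in a way that swallows the panel content. An explicit closing tag avoids that ambiguity (height/width values below are just examples):

```
<row>
  <panel>
    <title>TEST - iFrame in Dashboard</title>
    <html>
      <!-- explicit closing tag instead of a self-closing iframe -->
      <iframe src="https://www.wikipedia.org/" height="400" width="100%"></iframe>
    </html>
  </panel>
</row>
```

If it still renders empty, the browser console is the next place to look; recent Splunk versions sandbox dashboard HTML panels, which can also block embedded frames.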
Hello, currently we are using multiple data models for the same data (the post-filters are different). Now we are trying to merge them into a single data model using child datasets to decrease disk space, but apparently there is a massive performance impact on tstats search queries that use a child dataset. In the example below, using the child dataset increases search time by more than 300%. This might be expected, as it scans the full set of logs. Is there any other way of using a tstats query on a child dataset to improve performance? Datamodel structure
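One workaround to sketch: since acceleration summaries are built for the root dataset, querying the root and re-applying the child's constraint as a where clause can avoid the child-dataset penalty. "MyDM", "Root" and the constraint field below are placeholders for your actual data model:

```
| tstats summariesonly=true count from datamodel=MyDM.Root
    where Root.action="blocked"
    by Root.src, Root.dest
```

This only helps if the child's constraint fields are extracted in the root dataset; if the constraint relies on fields defined only on the child, they won't exist in the root's summary.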
Dear friends, I need your help writing a rex. Per my requirement, whatever value comes before a space needs to be considered the first value and extracted. The values need to be extracted and stored in a field (checkme, DATA_ACE, FdTest@Labcatr, FG_sam, Check@#$values, D&*fuse, Data*now). My sample attempt: (?P<Clu>DATA_ACE) Sample data:

DATA_ACE_CHE_Team FdTest@Labcatr DATA_ACE_CADD_Team DATA_ACE_CAM_Team DATA_ACE DATA_ACE_CSS_Team FG_sam Check@#$values checkme Data D&*fuse Data*now

Thanks in advance...
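If the goal is "everything up to the first space", a sketch keeping the Clu field name from the attempt above:

```
... | rex "^(?<Clu>\S+)"
```

\S+ grabs any run of non-space characters (so symbols like @, #, $, & and * are included), anchored at the start of the event. If instead every space-separated token on the line should become a value, `| rex max_match=0 "(?<Clu>\S+)"` would make Clu multivalued; which of the two you want depends on whether each event holds one value or the whole list.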
Hi all, does DB Connect support integration with CyberArk to manage users? If yes, is there documentation on how to implement it?
The lookup table files view only shows some basic info about each file. I often want to know the size and the age of the files. I've seen some Python additions that show this, but the lookup table files view seems to be the logical place to add such info. Can this be added in a future Splunk release?
How would I find the average value of a certain field per a certain number of events? Example: I have 1000 events containing a specific numerical field. What would I do if I wanted an average of every 10 events, displayed in a new table? So my new table would have 100 rows, each filled with the average of 10 events.
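A sketch, assuming the numeric field is called "value" (a placeholder): number the events with streamstats, bucket them into groups of 10, then average per group:

```
... | streamstats count as n
| eval group = ceil(n / 10)
| stats avg(value) as avg_value by group
| sort group
```

With 1000 events this yields 100 rows, one per group of 10, in the order the events came out of the search (newest first by default; add `| sort _time` before the streamstats if you want chronological groups).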
I am trying to create an email notification for users whose password is expiring soon, and I want to include the number of days until it expires. I have the search below. It returns the users that are due to expire, then emails every returned user that has an email address assigned in AD. I cannot seem to find a way to include how many days remain until expiry.

| ldapsearch domain=default search="(&(objectclass=user)(!(objectclass=computer)))" attrs="displayName, whenCreated, pwdLastSet, mail"
| dedup displayName
| eval DateLastChanged=strptime(pwdLastSet, "%Y-%m-%dT%H:%M:%S.%N")
| where DateLastChanged < relative_time(now(),"-69d@d")
| table DateLastChanged, displayName, mail
| outputcsv TempEmailFile.csv
| stats values(mail) AS email, values(DateLastChanged) as DateLastChanged
| mvexpand email
| map search="|inputcsv TempEmailFile.csv | where mail=\"$email$\" | fields - mail | sendemail sendresults=false inline=true server=\"0.0.0.0\" from=\"senderemail\" to=\"$email$\" subject=\"Password Reminder\" message=\"Your password is due to expire in ??? Days.\""

If anyone can think of a way to make it include the days until it expires, I would be very grateful.
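One approach to sketch: compute the remaining days before the map, and let map's $field$ token substitution carry it into the message. The 90-day expiry window below is an assumption; replace it with your domain's actual policy:

```
...
| eval days_left = round((DateLastChanged + (90 * 86400) - now()) / 86400, 0)
| map search="| inputcsv TempEmailFile.csv | where mail=\"$email$\" | fields - mail
    | sendemail sendresults=false inline=true to=\"$email$\"
      subject=\"Password Reminder\"
      message=\"Your password is due to expire in $days_left$ days.\""
```

map substitutes $...$ tokens from each incoming result row, so days_left just needs to survive into the row that feeds the map (i.e. add it to the stats/mvexpand output alongside email).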
How can I display threshold values when hovering over a single value visualization in a dashboard? I have referred to Answers and figured out the CSS below. How can I apply this effect to a single value visualization panel?

<panel>
  <html>
    <style>
      .custom-tooltip{ display: inline; position: absolute; }
      .custom-tooltip:hover:after{
        background: #333;
        background: rgba(0,0,0,.8);
        border-radius: 5px;
        bottom: 26px;
        color: #fff;
        content: attr(title);
        left: 20%;
        padding: 5px 15px;
        position: absolute;
        z-index: 98;
        width: 220px;
      }
      .custom-tooltip:hover:before{
        border: solid;
        border-color: #333 transparent;
        border-width: 6px 6px 0 6px;
        bottom: 20px;
        content: "";
        left: 50%;
        position: absolute;
        z-index: 99;
      }
    </style>
    <a title="Splunk" class="custom-tooltip">Splunk is great</a>
  </html>
</panel>
<panel>
  <single>
    <title>testing</title>
    <search>
      <query>|makeresults |eval a ="100"</query>
      <earliest>-24h@h</earliest>
      <latest>now</latest>
    </search>
    <option name="drilldown">none</option>
  </single>
</panel>
I am trying to match exactly 10 digits, which might sit between white spaces or symbols etc.: 1234567890 ,234567890 , 1234567890 :1234567890 etc., but not 10 digits taken from a string of 11+ digits. There are no distinguishing digits within these 10-digit IDs I am trying to identify; I am just trying to get as close as possible without generating too many false positives. TIA
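A sketch using lookarounds, so the 10 digits must not be preceded or followed by another digit (the field name is a placeholder):

```
... | rex max_match=0 "(?<!\d)(?<ten_digit_id>\d{10})(?!\d)"
```

The (?<!\d)/(?!\d) pair rejects runs of 11+ digits while still allowing any non-digit neighbor (space, comma, colon, start/end of event). max_match=0 captures every qualifying ID in the event into a multivalued field; drop it if you only need the first match.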
Hi, I want to set up an alert for whenever the SMTP server is stuck/hung, i.e. when SMTP is not sending alerts. My SMTP server gets stuck from time to time and won't send any emails until we restart the server, so I want to set up an alert in Splunk to notify me when the SMTP server is stuck. P.S. When the SMTP server is stuck, no logs are generated, so there are no specific error messages. Let me know if there is any way to set up such an alert. Thanks :)
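Since a hung server produces no logs, one common pattern is to alert on the absence of recent events rather than on an error message. A sketch, where the index, sourcetype, host and 30-minute silence window are all placeholders to adjust:

```
| tstats latest(_time) as last_seen
    where index=mail sourcetype=smtp host=my_smtp_host
| eval minutes_silent = round((now() - last_seen) / 60, 0)
| where minutes_silent > 30
```

One design caveat: if this alert's own notification goes out through the stuck SMTP server, it will never arrive; deliver it via a webhook or a different mail relay.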
Hi everyone! I have the following single value with trend:

<single>
  <search>
    <query>type="something" abc="$xyz$" res!=220
| timechart sum(field1) as Count span=7d
    </query>
    <earliest>-7d@d</earliest>
    <latest>now</latest>
  </search>
  <option name="trendInterval">-7d</option>
  <option name="colorBy">trend</option>
  <option name="useColors">1</option>
  <option name="showSparkline">1</option>
  <option name="underLabel">Compared to a week before</option>
  <option name="drilldown">all</option>
</single>

What I want is a comparison between the cumulative results of the current week and the previous week. However, the query compares only the results of the current day to the previous week, instead of the sum of the results received in the current week up to the current day. (Example: assume today is Wednesday; I want to compare the combined results of Monday, Tuesday and Wednesday to the week before, not just Wednesday. By "current week" I also mean the "selected week" according to the time token value.) Thanks in advance!
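One way to sketch a week-to-date comparison is to compute both sums explicitly instead of relying on trendInterval: take two weeks of data, label each event's week, and keep only the portion of last week that corresponds to how far we are into this week:

```
type="something" abc="$xyz$" res!=220 earliest=-1w@w latest=now
| eval week = if(_time >= relative_time(now(), "@w"), "this_week", "last_week")
| eval secs_into_week = _time - relative_time(_time, "@w")
| where week = "this_week"
    OR secs_into_week <= (now() - relative_time(now(), "@w"))
| chart sum(field1) as Count by week
```

This yields two rows (this_week vs. last_week-to-date) that a table or chart can compare directly; the stock single-value trend option always diffs point-in-time values, so the cumulative comparison has to be built into the search itself.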
Hi Splunk Support Team, we are using the VMware add-on Splunk_TA_esxilogs from Splunkbase and just want to understand why verbose/trivia logs are sent to nullQueue in transforms.conf. Is this a default configuration for this TA? What is the possible risk if these logs are not sent to nullQueue?

#NullQueues
[vmware_generic_level_null]
DEST_KEY = queue
FORMAT = nullQueue
REGEX = (?:verbose|trivia)[:\s]

[vmware_generic_level_null_4x]
DEST_KEY = queue
FORMAT = nullQueue
REGEX = ^\w+\s+\d+\s+[\d:]{8}\s+[^ ]+\s+\w+\s+\d+\s+[\d:]{8}\s.(?:verbose|trivia).
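If you decide you do want those events, a common pattern (a sketch; verify against your TA version and expect noticeably higher license usage, since verbose/trivia events are typically very high volume) is to override the transform in the TA's local/ directory with a regex that can never match, which effectively disables the filter without editing default/:

```
# Splunk_TA_esxilogs/local/transforms.conf -- sketch only
[vmware_generic_level_null]
DEST_KEY = queue
FORMAT = nullQueue
# (?!x)x can never match, so no event is routed to nullQueue
REGEX = (?!x)x
```

The same override would be needed for the _4x stanza if your hosts emit that older format.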