All Topics


From splunkd.log:

Traceback (most recent call last):
04-29-2020 10:15:14.055 -0500 ERROR sendmodalert - action=sendresults_alert STDERR - File "C:\Program Files\Splunk\etc\apps\sendresults\bin\sendresults_alert.py", line 206, in <module>
04-29-2020 10:15:14.055 -0500 ERROR sendmodalert - action=sendresults_alert STDERR - with gzip.open(payload.get('results_file'),'rt') as fin:
04-29-2020 10:15:14.055 -0500 ERROR sendmodalert - action=sendresults_alert STDERR - File "C:\Program Files\Splunk\Python-2.7\lib\gzip.py", line 34, in open
04-29-2020 10:15:14.056 -0500 ERROR sendmodalert - action=sendresults_alert STDERR - return GzipFile(filename, mode, compresslevel)
04-29-2020 10:15:14.057 -0500 ERROR sendmodalert - action=sendresults_alert STDERR - File "C:\Program Files\Splunk\Python-2.7\lib\gzip.py", line 94, in __init__
04-29-2020 10:15:14.057 -0500 ERROR sendmodalert - action=sendresults_alert STDERR - fileobj = self.myfileobj = __builtin__.open(filename, mode or 'rb')
04-29-2020 10:15:14.057 -0500 ERROR sendmodalert - action=sendresults_alert STDERR - ValueError: Invalid mode ('rtb')
04-29-2020 10:15:14.613 -0500 INFO sendmodalert - action=sendresults_alert - Alert action script completed in duration=1632 ms with exit code=1
04-29-2020 10:15:14.613 -0500 WARN sendmodalert - action=sendresults_alert - Alert action script returned error code=1
04-29-2020 10:15:14.613 -0500 ERROR sendmodalert - Error in 'sendalert' command: Alert script returned error code 1.

sendresults.log didn't have anything but this entry, which doesn't appear in the logs until after the upgrade, when the errors occur:

2020-05-04 11:40:43,437 INFO invocation_id=123456789.12:1234 invocation_type="action" py_version=sys.version_info(major=2, minor=7, micro=17, releaselevel='final', serial=0)

Rolled back to 4.0.1, and it's working again. Splunk is on 8.0.2.
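The traceback points at gzip.open(..., 'rt'): Python 2's gzip module does not accept the 'rt' text mode (it appends 'b', producing the invalid 'rtb' in the error). A version-agnostic workaround is to open in binary and layer a text wrapper on top; this is a standard-library-only sketch, and the helper name is illustrative:

```python
import gzip
import io


def open_gzip_text(path):
    """Open a gzip file for text reading on both Python 2.7 and 3.

    Python 2's gzip rejects mode 'rt' (it becomes the invalid 'rtb'),
    so open the file in binary mode and wrap the stream in a text
    decoder instead.
    """
    return io.TextIOWrapper(io.BufferedReader(gzip.GzipFile(path, 'rb')))
```

Swapping the `with gzip.open(payload.get('results_file'), 'rt') as fin:` line for this helper should behave the same under either interpreter.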
Team, I'm getting the following error when trying to add an input after configuring the first step of the app. I have confirmed I can cURL and authenticate from the same Linux box using OAuth on Strava's API, but I can't seem to get any further with this app. I believe the initial error on 'get_customized_setting' points to some Python scripts, which could be here:

./bin/ta_strava_for_splunk/aob_py2/splunk_aoblib/setup_util.py
./bin/ta_strava_for_splunk/aob_py2/modinput_wrapper/base_modinput.py
./bin/ta_strava_for_splunk/aob_py3/splunk_aoblib/setup_util.py
./bin/ta_strava_for_splunk/aob_py3/splunk_aoblib/__pycache__/setup_util.cpython-37.pyc
./bin/ta_strava_for_splunk/aob_py3/modinput_wrapper/base_modinput.py
./bin/ta_strava_for_splunk/aob_py3/modinput_wrapper/__pycache__/base_modinput.cpython-37.pyc

Logging only gives me:

2020-05-04 16:19:37,677 ERROR pid=14386 tid=MainThread file=base_modinput.py:log_error:309 | error?

Any help gratefully appreciated!
Good afternoon,

I know that there is official information regarding the maximum number of concurrent and scheduled searches, based on the number of CPUs and servers that the cluster has.

Could someone help clarify these values for me? I currently have 6 indexers with 36 cores each and 6 search heads with 28 physical cores each.

I understand that scheduled searches would apparently take 50% of these values.

Your support is appreciated.
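For reference, the commonly cited formula is base_max_searches + max_searches_per_cpu × CPU cores per search head (limits.conf defaults: 6 and 1), with the scheduler capped at max_searches_perc of that total (default 50%). A sketch under those assumed defaults:

```python
def max_concurrent_searches(cpu_cores, base_max_searches=6, max_searches_per_cpu=1):
    """Per-search-head concurrency limit: base allowance plus one per core."""
    return base_max_searches + max_searches_per_cpu * cpu_cores


def max_scheduled_searches(cpu_cores, max_searches_perc=50):
    """Scheduler's share of the concurrency limit (a percentage of the total)."""
    return max_concurrent_searches(cpu_cores) * max_searches_perc // 100


# For one search head with 28 physical cores:
print(max_concurrent_searches(28))  # 34 total concurrent searches
print(max_scheduled_searches(28))   # 17 of those available to the scheduler
```

Whether logical or physical cores are counted, and how limits aggregate across the 6 search heads, depends on the deployment and Splunk version, so treat these numbers as a starting point rather than a definitive answer.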
Hi, I am new to Splunk/JavaScript and need your help reducing my code. I have created two classes for 2 fields, but we have 50 fields in the Splunk table. The classes change the font and background color based on a condition. With the current approach I would need to repeat this 50 times, i.e. once for each field, and each time I need to add another setTimeout() call. How can I remove this redundancy so that the class handling and the setTimeout call are written only once in JavaScript? Kindly find my code below.

table.js:

require([
    'underscore',
    'jquery',
    'splunkjs/mvc',
    'splunkjs/mvc/tableview',
    'splunkjs/mvc/simplexml/ready!'
], function(_, $, mvc, TableView) {
    // Row coloring example with custom, client-side range interpretation
    var CustomRangeRenderer = TableView.BaseCellRenderer.extend({
        canRender: function(cell) {
            // Enable this custom cell renderer for every field
            //return _(["LOgType"]).contains(cell.field);
            return true;
        },
        render: function($td, cell) {
            // Add a class to the cell based on the returned value
            var value = cell.value;
            if (cell.field === "LOgType") {
                if (value == "error") {
                    $td.addClass('range-cell').addClass('range-severe');
                }
            }
            if (cell.field === "ID") {
                if (value !== null) {
                    $td.addClass('range-con').addClass('range-low');
                }
            }
            if (cell.field === "Desc") {
                if (value !== null) {
                    $td.addClass('range-cos').addClass('range-high');
                }
            }
            // Update the cell content
            $td.text(value).addClass("string");
        }
    });

    mvc.Components.get('Table1').getVisualization(function(tableView) {
        tableView.on('rendered', function() {
            // Apply the class of the cells to the parent row
            // in order to color the whole row
            setTimeout(function() {
                tableView.$el.find('td.range-cell').each(function() {
                    $(this).parents('tr').addClass(this.className);
                });
            }, 100);
            setTimeout(function() {
                tableView.$el.find('td.range-con').each(function() {
                    $(this).parents('tr').addClass(this.className);
                });
            }, 100);
            setTimeout(function() {
                tableView.$el.find('td.range-cos').each(function() {
                    $(this).parents('tr').addClass(this.className);
                });
            }, 100);
        });
        // Add custom cell renderer; the table will re-render automatically.
        tableView.addCellRenderer(new CustomRangeRenderer());
    });
});

table.css:

#Table1 tr.range-severe td {
    color: red;
}
#Table1 tr td.range-low {
    background-color: #FFC597 !important;
}
#Table1 tr td.range-high {
    background-color: #FFC597 !important;
}
Hi, I have been using Splunk to parse a particular set of logs for many years, but recently I have started facing an issue: a few of the events are getting merged instead of being parsed as separate events. Consider the example below:

2020-05-04 16:45:47,122 [ INFO] [CMEPS_JMSMessengerInject-EnterpriseMessageListener-186] - s_proc_id=921844e5-8130-4f29-9418-5622d95dfeef s_comp_id=ARCHIVER s_seq_no=9 s_proc_dur=372 s_proc_outcome=success
2020-05-04 16:45:48,124 [ INFO] [CMEPS_JMSMessengerInject-EnterpriseMessageListener-186]

These two events should be segregated and should not be merged under any condition. Could someone help me with the correct props.conf for this sourcetype, so that Splunk only starts an event at this timestamp pattern, e.g. 2020-05-04 16:45:47?

Regards, Devang
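For events like these, the usual pattern is to disable line merging and break the stream only where the next line starts with the leading timestamp. A hedged props.conf sketch (the stanza name is a placeholder for the real sourcetype):

```ini
[my_custom_sourcetype]
SHOULD_LINEMERGE = false
# Break only where a line begins with "YYYY-MM-DD HH:MM:SS,mmm"
LINE_BREAKER = ([\r\n]+)(?=\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2},\d{3})
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S,%3N
MAX_TIMESTAMP_LOOKAHEAD = 25
```

This needs to live on the first full Splunk instance that parses the data (indexer or heavy forwarder), not on a universal forwarder.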
We built our own app that only works in Python 3, and I would like to know how to force Splunk to use Python 3 for this app. We don't want to ask users to change the system-wide setting in server.conf. The app adds a new input form to Splunk under local inputs. Below is the error I'm getting:

"Unable to initialize modular input "appname" defined in the app "appname": Introspecting scheme=appApi: script running failed (exited with code 1).."

I tried adding python.version = python3 in inputs.conf, app.conf and props.conf, but that does not fix it. Any idea how to fix this? Or can it only be fixed by making the script compatible with Python 2?
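For reference, on Splunk 8.x the interpreter for a modular input is normally selected with a stanza-level python.version in the app's inputs.conf, under the input scheme's own stanza (not a [script://] stanza). A sketch, where the scheme name is a placeholder and the setting is ignored on pre-8.0 instances:

```ini
# default/inputs.conf inside the app
[appname]
python.version = python3
```

If that is exactly where it was already tried, the stanza name not matching the scheme, or a pre-8.0 Splunk version, would be the usual suspects to rule out next.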
Let's say I have a CSV with the following, spanning 10 years:

Date | Time | Value
2020-05-01 | 4:00:00 PM | 49.88

If I run a timechart, it works fine for the last several years, but if I select All Time then it incorrectly parses the timestamp and groups multiple days' worth of values into a single day:

_time | values(Close)
2014-11-12 | 1.86 1.87 1.88 1.92

If I view the events, the parsed timestamp is incorrect, but only for really old events:

Time (Splunk parsed): 11/12/14 4:00:00.000 PM
Full event: 2010-05-04,4:00:00 PM,8.68,46458590,9.08,9.08,8.54
Time (Splunk parsed): 11/12/14 4:00:00.000 PM
Full event: 2010-05-26,4:00:00 PM,8.22,37479000,8.39,8.59,8.18

I tried this with the built-in CSV sourcetype as well as a custom one. Thanks for any help!

EDIT: Here's an example. Download the Max dataset from here: https://www.nasdaq.com/market-activity/stocks/amd/historical
Note it doesn't have a timestamp, so a new column called Time was added with 16:00:00 (end of market close). I used the default CSV sourcetype as a test and hit the same issue.

Test search (All Time):

source="filename.csv" index="test" | timechart values("Close/Last") span=1d

Around 2014 it starts mis-parsing (Statistics tab -> click on date -> view events -> _time is different from the event date).
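One relevant default worth noting: timestamps more than MAX_DAYS_AGO days in the past (2000 by default, roughly five and a half years) are rejected at parse time, which lines up with the ~2014 cutoff described above. A hedged props.conf sketch for a custom sourcetype (stanza name and value are illustrative):

```ini
[my_csv_sourcetype]
# Default is 2000 days; raise it to cover ~11 years of history
MAX_DAYS_AGO = 4000
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d,%I:%M:%S %p
```

This would need to be applied before (re)indexing the file, since timestamps are assigned at index time.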
We have some archived frozen buckets that are named "indexname-yyyy-mm-dd-hh-min" instead of the db_<endtime>_<starttime>_<guid> format. When we try to rebuild these, we get the error "fsck - Constraints given leave no buckets to operate on". Is this due to the odd naming of the buckets? They were archived using the ColdToFrozen.py script supplied with Splunk, but altered by one of our admins to write the buckets out with the new naming convention. Is our data unthawable? Is there a command we can run to extract the correct information so we can rename the directories appropriately?
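Thawed bucket directories are expected to look like db_<newest event epoch>_<oldest event epoch>_<local id>. If the only recoverable timestamp is the one embedded in the custom name, a rename can be approximated. This is a speculative sketch: it assumes the embedded time (interpreted as UTC here) stands in for both ends of the bucket's time range, and that the id only needs to be unique within the index:

```python
import calendar


def frozen_to_db_name(custom_name, bucket_id):
    """Map 'indexname-yyyy-mm-dd-hh-min' to 'db_<end>_<start>_<id>'.

    Treats the embedded timestamp as both the newest and oldest event
    time; adjust the two epochs if the real range is known.
    """
    # The last five dash-separated fields are the timestamp components.
    y, mo, d, h, mi = (int(p) for p in custom_name.split('-')[-5:])
    epoch = calendar.timegm((y, mo, d, h, mi, 0))
    return 'db_{0}_{1}_{2}'.format(epoch, epoch, bucket_id)


print(frozen_to_db_name('myindex-2020-05-04-11-40', 0))  # db_1588592400_1588592400_0
```

Note that the epochs in the directory name are used for time-range pruning at search time, so approximate values may cause searches near the boundary to skip the bucket.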
Hello good people of the Splunk Community. This one's got me foxed.

I noticed this morning that the splunkd logs on my Raspberry Pi-hosted Universal Forwarder are rotating really quickly (check out the timestamps below; it is literally creating log entries as fast as the CPU will spin) and I've got no idea why. Oddly, the error appears to originate in Splunk's own log at /opt/splunkforwarder/var/log/splunk/splunkd.log.

At first I thought the error must have been introduced from a parsed log, but then I realised two odd things: firstly, the splunkd errors I'm seeing reference the log itself as the source of the problem, and secondly, it appears to take issue with the number '5' (in its own log). Removing the logs and restarting the forwarder doesn't help, and rebooting the RPi doesn't help. As soon as the splunkd service starts, it immediately spams splunkd.log with this. Anyone any ideas what I'm missing? Here's what it looks like:

05-04-2020 15:48:24.117 +0100 ERROR JsonLineBreaker - JSON StreamId:14919777892573414995 had parsing error:Unexpected character: '5' - data_source="/opt/splunkforwarder/var/log/splunk/splunkd.log", data_host="rpi3", data_sourcetype="json"
05-04-2020 15:48:24.117 +0100 ERROR JsonLineBreaker - JSON StreamId:14919777892573414995 had parsing error:Unexpected character: '5' - data_source="/opt/splunkforwarder/var/log/splunk/splunkd.log", data_host="rpi3", data_sourcetype="json"
05-04-2020 15:48:24.118 +0100 ERROR JsonLineBreaker - JSON StreamId:14919777892573414995 had parsing error:Unexpected character: '5' - data_source="/opt/splunkforwarder/var/log/splunk/splunkd.log", data_host="rpi3", data_sourcetype="json"

... and the identical line repeats, as fast as it can be written ...
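The pattern here looks like a feedback loop: if an inputs.conf monitor stanza forces sourcetype=json onto the forwarder's own splunkd.log (which is plain text, not JSON), then every parse failure writes a new non-JSON error line, which in turn fails to parse. A speculative sketch of the kind of stanza to look for (the path is from the post; the stanza itself is illustrative):

```ini
# Suspect: a monitor input applying a JSON sourcetype to Splunk's own
# internal log. The internal logs should keep their default sourcetype.
[monitor:///opt/splunkforwarder/var/log/splunk/splunkd.log]
# sourcetype = json   <- an override like this would cause the loop
```

Checking `splunk btool inputs list --debug` on the forwarder would show where such a stanza is coming from, if one exists.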
Hi, I am trying to get input from a PowerShell script, and it's driving me up the wall. I already have other PS scripts running just fine, so this really puzzles me. I have 3 heavy forwarders on Splunk 8.0.2.1 and 18 universal forwarders on Splunk 7.2.4. When using this inputs.conf setting:

[powershell://df]
script = Get-WmiObject Win32_LogicalDisk | Select-Object DeviceID,Size,FreeSpace | findstr.exe '[0-9]$'
index = os_monitoring
schedule = */5 * * * *
source = df-win
sourcetype = os:monitoring:diskspace
disabled = 0

I get input on only 3 UF hosts and 2 HF hosts. One of the HF hosts delivers the following in the _audit log, but no output:

05-04-2020 16:35:00.0014151+2 INFO enqueue job for stanza=df
05-04-2020 16:35:00.0014151+2 INFO Start executing script=Get-WmiObject Win32_LogicalDisk | Select-Object DeviceID,Size,FreeSpace | findstr.exe '[0-9]$' for stanza=df
05-04-2020 16:35:00.0170289+2 INFO End of executing script=Get-WmiObject Win32_LogicalDisk | Select-Object DeviceID,Size,FreeSpace | findstr.exe '[0-9]$' for stanza=df, execution_time=0.0156138 seconds

The other boxes do not deliver anything in terms of output or errors; I just see that the app is deployed. When switching to a real script like the following:

script = . "$SplunkHome\etc\apps\FA-windows-diskspace\bin\scripts\df.ps1"

I again get the same result. The majority of systems do not deliver output, and I see no errors in the _* indices. I am a bit lost. I would expect all machines to fail or none, not this inconsistent behaviour. Any ideas?

thx
afx
Please confirm whether the Network Toolkit's ping can ping only 10 servers. I can see only 10 results.
Hi there,

I'm trying to delete old SAML users on a SH cluster with Splunk 7.1.4. I followed the instructions here: https://answers.splunk.com/answers/525555/how-do-i-remove-old-saml-users.html but I still have these users in the Access Control > Users page. More surprisingly, if I request:

curl -k -u admin:{password} --request GET https://{searchhead}:8089/services/admin/SAML-user-role-map/{user}

I get a positive answer (user found). But if I request:

curl -k -u admin:{password} --request DELETE https://{searchhead}:8089/services/admin/SAML-user-role-map/{user}

it says:

In handler 'SAML-user-role-map': Does not exist: /nobody/system/authentication/userToRoleMap_SAML/{user}

These users are not in the /etc/users folder nor in the authentication.conf file. I also tried the authentication/users method, and I tried debug/refresh and restarting the SH cluster, without the expected result. Any idea?

Regards, Francois
Hi All,

I'm a new Splunk admin working in a pretty large Splunk Cloud environment. Historically, the folks on the admin/engineering team have defined custom sourcetypes inside a custom application that is installed on our SHs and indexers. They create each sourcetype by adding a stanza to props.conf and configuring the settings within the stanza. I'm all for best practices, so I wanted to see if the Splunk community could weigh in and point me in the right direction on how I should be creating new custom sourcetypes. Would the best way be to create the sourcetype in the GUI? If so, which app should I save the sourcetype in? I could continue using the current process, although it requires a rolling restart of our indexers and SHs, which causes an outage during each update to the custom app. If there is any other information I should include, please let me know. Thank you.
Ubuntu 18.04.4 LTS, trying to install Python for Scientific Computing (for Linux 64-bit). I've seen other similar questions but no answers. This is Splunk Enterprise Version 8.0.2.1, Build f002026bad55.
Hey, I'm trying to extract the values from _time into new fields (Year, Month, Day) in order to compare the average of events during the current month to the last 3 months, but it seems like they do not get any value. Here is my search (the rex named groups were malformed, e.g. (?<Month\d+), which I have corrected below, along with the Current_Month field name mismatch):

'soc_events' | search * Rule_Name="*"
| eval mytime=strftime(_time, "%Y/%m/%d")
| rex field=mytime "(?<Year>\d+)/(?<Month>\d+)/(?<Day>\d+)"
| stats count as Count by Year, Month, Day
| sort Year, Month, Day
| eventstats last(Month) as Current_Month last(Year) as Current_Year
| where Month!=Current_Month OR Year!=Current_Year
| stats avg(Count) as DayAverage values(Month) as Months by Day
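As an aside, the strftime/rex round-trip can be avoided by formatting each component straight from _time, which sidesteps regex escaping issues entirely. A sketch of the equivalent first steps (same field names as above):

```
'soc_events' | search * Rule_Name="*"
| eval Year=strftime(_time, "%Y"), Month=strftime(_time, "%m"), Day=strftime(_time, "%d")
| stats count as Count by Year, Month, Day
```

Note that strftime returns strings, so zero-padded values like "05" compare consistently, which the regex-extracted fields do as well.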
I am looking for a query that will help me monitor hidden file and folder creations on Linux/Win boxes. Can the community point me in the right direction ?
Hi, I have a search that returns an "os" field, and using map I ping all hosts returned from the search with the ping command. Is there a way to include the os field in the ping result index?
I'm trying to use the REST API to update a large number of alerts/saved searches across multiple environments. Specifically, I want to ensure that CSVs are attached to emails. I'm testing this in a lab on Splunk 8.0.2. I've tried using both of these:

curl -u splunk_user:splunk_pass -X POST -sk "https://splunk_ip:8089/services/saved/searches/(search title URL encoded)" -d action.email.sendcsv=1

curl -u splunk_user:splunk_pass -X POST -sk "https://splunk_ip:8089/servicesNS/(owner)/(app)/saved/searches/(search title URL encoded)" -d action.email.sendcsv=1

However, what seems to be happening is that the alerts are being cloned into reports instead of being updated in place. The name is identical, and the search is being copied over despite not being in the curl request. I'm not sure what is going wrong with these API calls. They look correct by my reading of the API documentation, but I may be overlooking something.
Hello everyone,

I have Splunk Universal Forwarder running on a server, watching a few files for changes. Log data is appended to the end of the files every 5 minutes or so. Up until a few days ago, all files were being correctly monitored. Today I noticed that a single file out of 10+ is not being monitored correctly. When the script appends something to the file and closes it (thus updating the modification date), the data doesn't arrive at the index. However, if I open the file, change anything and save it, all the data that should have arrived suddenly arrives. This problem started out of the blue. I tried restarting the Universal Forwarder service, changing how the file is saved, and deleting the file and letting the script re-create it, but it still won't work. Any ideas? Has this ever happened to anyone before?

P.S.: The file is opened and closed explicitly in my script. All the other files do the same thing and work; only this one file is giving me trouble.

Thanks!
I am using the Universal Forwarder to collect information on a Java process. When monitoring "% Processor Time" for a specific process, I noticed a discrepancy between the results from Performance Monitor vs. Resource Monitor for this process when pulling the information using default values (below is my inputs.conf):

[perfmon://Process]
object = Process
counters = % Processor Time;Working Set;Working Set Peak;
instances = _Total;java;javaw
interval = 10

It seemed to be corrected when I scaled % Processor Time down to 10% of the default value in my Splunk search (shown below):

host="" (counter="% Processor Time" AND instance=java OR instance=javaw) OR (collection="CPU Load" AND instance=_Total)
| timechart span=20s avg(Value) AS "CPU Utilization" by instance
| eval java=java/10
| eval javaw=javaw/10
| rename java as "Appserver CPU" javaw as "Client CPU" VALUE_Total as "Total Machine CPU"

I thought this fixed the issue until we actually got a spike in CPU, and I noticed the java and javaw values now max out at 10% and won't go any higher. I know it is supposed to max out at 100%, and I scaled it down to 10% of the normal value, which makes sense to me, but if it really is maxing out there, why can't I get the real values sent to the indexer from perfmon in the first place? Any help would be much appreciated. Thanks!
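For context, the per-process "% Processor Time" counter sums CPU time across logical cores, so its raw value can reach 100 × the core count; dividing by the machine's logical core count (rather than a fixed 10) is the usual normalization, and a 10-core box would explain why /10 happened to look right. A hedged sketch of that arithmetic:

```python
import os


def normalize_process_cpu(raw_percent, logical_cores=None):
    """Scale a per-process '% Processor Time' reading to the 0-100% range.

    The raw counter can reach 100 * logical_cores on multi-core machines,
    which looks like 'maxing out at 10%' once divided by a fixed factor
    such as 10 on a box with more (or fewer) than 10 cores.
    """
    if logical_cores is None:
        logical_cores = os.cpu_count() or 1
    return raw_percent / logical_cores


# A process fully using 4 of 10 logical cores reads 400 raw:
print(normalize_process_cpu(400, logical_cores=10))  # 40.0
```

In the SPL above, that would mean dividing java and javaw by the host's logical core count instead of the hard-coded 10.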