All Topics

I have tried to set up the HTTP Event Collector via the CLI using this Splunk documentation link: https://docs.splunk.com/Documentation/Splunk/8.0.3/Data/UseHECfromtheCLI

The commands I have executed are as follows:

/opt/splunk/bin/splunk http-event-collector create sdapp01 -uri https://localhost:8089 -description "this is a new token" -disabled 1
/opt/splunk/bin/splunk http-event-collector enable -name sdapp01 -uri https://localhost:8089 -auth admin:changeme
curl -k -u admin:changeme https://localhost:8089/servicesNS/admin/splunk_httpinput/data/inputs/http
splunk http-event-collector send -uri https://localhost:8089 -token 206f9ca0-24bd-48fd-95e8-dfdcaa17657a {"this is some data"}
curl -k https://localhost:8089/services/collector -H 'Authorization: Splunk 206f9ca0-24bd-48fd-95e8-dfdcaa17657a' -d '{"sourcetype": "demo", "event":"Hello, world!"}'

While sending data, I get the error below:

<?xml version="1.0" encoding="UTF-8"?>
<response>
  <messages>
    <msg type="WARN">call not properly authenticated</msg>
  </messages>
</response>

Config details are as follows:

local/inputs.conf

[http://sdapp01]
disabled = 0
token = 206f9ca0-24bd-48fd-95e8-dfdcaa17657a

default/inputs.conf

[http]
disabled=1
port=8088
enableSSL=1
dedicatedIoThreads=2
maxThreads = 0
maxSockets = 0
useDeploymentServer=0
# ssl settings are similar to mgmt server
sslVersions=*,-ssl2
allowSslCompression=true
allowSslRenegotiation=true

I am not sure what I have missed. The token is enabled and not expired. I have tried creating multiple tokens but am stuck with the same issue. Can someone please help?
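One thing worth noting from the commands above: the send attempts target port 8089, which is Splunk's management port, while the [http] stanza shows HEC configured on its default port 8088 (and disabled=1 at the global level). As a minimal sketch, assuming HEC is enabled globally and listening on the default port, the curl send would look like this:

# send to the HEC port (8088 by default), not the management port (8089)
curl -k https://localhost:8088/services/collector \
  -H 'Authorization: Splunk 206f9ca0-24bd-48fd-95e8-dfdcaa17657a' \
  -d '{"sourcetype": "demo", "event": "Hello, world!"}'

If the global [http] input is still disabled, it would also need enabling (via the UI under Data Inputs > HTTP Event Collector > Global Settings, or by setting disabled = 0 in a local inputs.conf rather than in default/).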
eventtype=osquery_osquery name="pack_incident_response_*" earliest=-5m | fieldsummary

Output: a table containing multiple columns such as field, count, distinct_count, is_exact, etc.

Required output: only one column.

Not working:

| table -count, -distinct_count,
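A minimal sketch, assuming the goal is to keep only the field column: table does not accept a minus prefix for excluding columns (that exclusion syntax belongs to the fields command), so either keep the one column you want or explicitly drop the others:

eventtype=osquery_osquery name="pack_incident_response_*" earliest=-5m
| fieldsummary
| fields field

Equivalently, | fields - count distinct_count is_exact would remove the named columns instead.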
I finally figured out how to use the entity REST API object to pull my informational values. The only problem is I can't figure out how to dynamically assign key-value pairs to a table from multivalue JSON arrays. This is as far as I have gotten, and I would greatly appreciate your eyes and time:

| rest splunk_server=local /servicesNS/nobody/SA-ITOA/itoa_interface/entity fields="_key,title,identifier,informational,identifying_name" report_as=text
| eval value=spath(value,"{}")
| mvexpand value
| eval entity_title=spath(value, "title"),
       entity_name=spath(value, "identifying_name"),
       entity_aliases=mvzip(spath(value, "identifier.fields{}"),spath(value, "identifier.values{}"),"="),
       entity_info=mvzip(spath(value, "informational.fields{}"),spath(value, "informational.values{}"),"="),
       cpu_cores=spath(value, "informational.values{0}")

The second-to-last eval brings the informational values I want together, but as a multivalue field. The last eval simply hard-codes my field to one array position, and I would like to avoid having to hard-set it for every value. Not every entity is configured the same (I am not above clearing out my 16,000 entities and forcing standard entry by users). Dynamically creating the key-value pairs as single-value fields would be much more palatable. I appreciate any help at all.
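A minimal sketch of one way to turn those field=value pairs into dynamically named single-value fields, assuming entity_info holds strings like cpu_cores=4 (the rex field names here are hypothetical): SPL's eval {field}=value syntax creates a field whose name comes from another field's value:

| mvexpand entity_info
| rex field=entity_info "^(?<info_key>[^=]+)=(?<info_value>.*)$"
| eval {info_key}=info_value
| stats values(*) as * by _key

The stats at the end collapses the expanded rows back down to one row per entity.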
I just want to get the left cluster (only Table A) as in the picture below. How should the Splunk search be written? Thank you.
Hi, can an AppDynamics customer write a blog post about an innovation or an awesome use case they solved using AppDynamics? If yes, how?
Example of search in the nav bar: I only want the Search item to be viewable by admins. I have looked at other Splunk questions: https://answers.splunk.com/answers/8037/can-an-app-use-multiple-navigation-menus.html , https://answers.splunk.com/answers/43845/navigation-menu.html , etc. Is there an easier/less invasive way to achieve this functionality? Given that Splunk search is not a personally customized dashboard but is built into Splunk, I am not certain the above solutions would work. Furthermore, I am not trying to stop users from entering the URL /en-GB/app//search directly; I just do not want users with non-admin privileges to be able to view/click that URL and use Splunk search. Any guidance would be much appreciated...
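A minimal sketch of one common approach, assuming the goal is to hide the built-in search view from non-admin roles: restrict read access on the view in the app's metadata, which also removes it from the navigation for roles without read permission:

# metadata/local.meta in the app (a sketch; adjust app and role names to your setup)
[views/search]
access = read : [ admin ], write : [ admin ]

Note this controls view-level permissions rather than just navigation visibility, so it is stronger than editing the nav XML alone.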
I am trying to combine two searches into one. However, results from the second search should only be returned if there are results in the first one. If there are no results from the first query, then nothing should be returned; if there are results from the first one, then display the results from the second search.

First query:

index=batch
| where jobName="test_job1" AND statusText in ("FAILURE", "SUCCESS")

Second query:

index=batch
| where like(jobName,"test%") AND statusText="FAILURE"
| stats earliest(statusText) as First_Failure, earliest(timestamp) as First_Failure_Time by jobName, machine
| join jobName
    [search index=batch
    | where like(jobName,"test%")
    | stats latest(statusText) as Latest_Status, latest(timestamp) as Last_Updated_time by jobName]
| sort - Last_Updated_time

Any help would be appreciated. Thanks
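A minimal sketch of one way to gate the second search on the first: a subsearch that returns a field literally named search has its value injected into the outer search string, so it can emit a matching filter when the first query has results and an impossible one when it does not (the __no_results__ placeholder is hypothetical; it just needs to match nothing):

index=batch
    [search index=batch jobName="test_job1" statusText IN ("FAILURE", "SUCCESS")
    | stats count
    | eval search=if(count>0, "jobName=test*", "jobName=__no_results__")
    | fields search]
    statusText="FAILURE"
| stats earliest(statusText) as First_Failure, earliest(timestamp) as First_Failure_Time by jobName, machine

followed by the join/sort from the second query above. When the first query returns nothing, the injected filter matches no events and the whole pipeline returns no rows.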
I am trying to create an alert for when 90% of RAM is used. I have determined that Committed Bytes is the amount of RAM in use, but I don't know how to get total RAM. I am NOT trying to find an absolute amount of memory but the percentage of physical RAM in use. Thanks
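A minimal sketch, assuming Windows Perfmon data from the Memory object (the sourcetype, counter, and field names here are assumptions; adjust to match your inputs). Since total physical RAM is not itself a Perfmon Memory counter, one option is to derive percentage-in-use from Available Bytes plus a known total per host:

index=windows sourcetype="Perfmon:Memory" counter="Available Bytes"
| stats latest(Value) as available_bytes by host
| eval total_bytes=17179869184 ``` hypothetical: 16 GB; replace with real totals or a per-host lookup ```
| eval pct_used=round(100 * (total_bytes - available_bytes) / total_bytes, 1)
| where pct_used > 90

Alternatively, the "% Committed Bytes In Use" counter reports commit charge against the commit limit directly, though that includes the page file rather than physical RAM alone.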
Hello, I am trying to upload a .csv file and I am getting three error messages in my internal logs:

-0400 ERROR TailReader - error from read call from
WARN FilesystemChangeWatcher - error getting attributes of path "D:\Dados\SKY\Compartilhado\TECNOL~1\MONITO~1\TransUnion2Splunk\O0055TRANSUNION_COLETAFINALDESEMANA_30042020.csv": Access is denied.
WARN FileClassifierManager - Unable to open 'D:\Dados\SKY\Compartilhado\TECNOL~1\MONITO~1\TransUnion2Splunk\O0055TRANSUNION_COLETAFINALDESEMANA_TESTE.CSV'
ERROR TailReader - error from read call from 'D:\Dados\SKY\Compartilhado\TECNOL~1\MONITO~1\TransUnion2Splunk\O0055TRANSUNION_COLETAFINALDESEMANA_TESTE.CSV'.

And the file is not being uploaded into Splunk. I checked the permissions, and they are correct. The Splunk UF is running as the SYSTEM account. Can you please help me figure out why I am getting this error and why my file is not getting indexed?

My inputs.conf:

[monitor://D:\Dados\SKY\Compartilhado\TECNOL~1\MONITO~1\TransUnion2Splunk\O0055TRANSUNION_COLETAFINALDESEMANA_*.CSV]
_TCP_ROUTING = *
index=INDEX
source=INDEXXX:XXX
sourcetype=INDEXXX:XXX
disabled = 0
time_before_close = 60
multiline_event_extra_waittime = true
initCrcLength = 512
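Since the watcher reports "Access is denied" even though the UF runs as SYSTEM, one hedged way to verify what SYSTEM can actually read is to open a shell as that account (PsExec from Sysinternals, run from an elevated prompt) and try the file directly:

psexec -s cmd.exe
type "D:\Dados\SKY\Compartilhado\TECNOL~1\MONITO~1\TransUnion2Splunk\O0055TRANSUNION_COLETAFINALDESEMANA_TESTE.CSV"

If this also fails, the effective NTFS or share permissions are blocking SYSTEM regardless of what the ACL summary shows.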
Auditing has already been enabled, but we are having trouble determining who changed the permissions.
Here is my query (time range is YTD):

(splunk_server=indexer* index=wsi_tax_summary sourcetype=stash capability=109* tax_year=2019 ein=* intuit_offeringid=* partnerId!=*test* partnerId=*)
| timechart span=1d dc(intuit_tid) as 19attempts
| streamstats sum(19attempts) as 19attempts
| eval time=strftime(_time,"%m-%d")
| join type=left time
    [ inputlookup TY18_Splunk_total_data.csv
    | where capability="109X"
    | stats sum(attempts) as 18attempts by _time
    | streamstats sum(18attempts) as 18attempts
    | eval time=strftime(strptime(_time,"%m/%d/%Y"), "%m-%d")
    | fields time 18attempts]
| fields time 19attempts 18attempts
| rename 19attempts as "TY19"
| rename 18attempts as "TY18"

I understand a left join to mean that subsearch results that don't match the main search won't be included. If I run the query above, I get data in the TY18 column from 01-02 through 01-09 (below). I didn't expect data against those dates, so I copied the subsearch and ran it in a separate search window, and I can see (as I expected) there's no data from 01-02 through 01-09 (below). Am I not understanding something about the join type? What's happening here?
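For reference, in SPL a join type=left keeps every row of the main search and adds subsearch fields only where the key matches; unmatched rows simply get null TY18. A minimal sketch for debugging, assuming the question is which time keys the subsearch actually emits (since "%m-%d" drops the year, dates from different years can collide on the same key):

| inputlookup TY18_Splunk_total_data.csv
| where capability="109X"
| eval time=strftime(strptime(_time,"%m/%d/%Y"), "%m-%d")
| stats count by time
| sort time

Comparing this key list against the dates showing unexpected TY18 values should reveal where the matches are coming from.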
We are connecting to a Solace queue from the JMS_TA app to get queue data into Splunk. We successfully established the connection and are able to get data from the queue, but after some time we get the error below:

ERROR ExecProcessor - message from "python /opt/splunk/etc/apps/jms_ta/bin/jms.py" Caused by: ((Client name: XXXXXXXXX/6828/#XXXXXX Local addr: XXXXXXX:XXXXXXX Remote addr: syseai2.XXXX.com:55555) - ) com.solacesystems.jcsmp.JCSMPErrorResponseException: 503: Max Client Queue and Topic Endpoint Flow Exceeded.
Name: Test

"extensionData": {
  "entries": [
    { "key": "machinesTotal", "value": { "type": "integer", "value": 7 } },
    { "key": "endpoint", "value": { "type": "string", "value": "vcenter.local" } },
    { "key": "quotaAllocatedPercentage", "value": { "type": "string", "value": "27% (6 of 22)" } },
    { "key": "storageUsed", "value": { "type": "integer", "value": 1006 } },
    { "key": "machinesAllocatedPercentage", "value": { "type": "string", "value": "86% (6 of 7)" } },
    { "key": "memoryAllocatedPercentage", "value": { "type": "string", "value": "20% (72 GB of 352 GB)" } },
    { "key": "computeResource", "value": { "type": "string", "value": "Data-Cluster" } },
    { "key": "storageAllocatedPercentage", "value": { "type": "string", "value": "23% (1006 GB of 4400 GB)" } },
    { "key": "storageUsedPercentage", "value": { "type": "string", "value": "23% (1006 GB of 4400 GB)" } }
  ]
}

I need this data in table format. Can you help me with this?
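A minimal sketch, assuming the events arrive in Splunk with this JSON in _raw (the base search is a placeholder): spath pulls out the entries array, mvexpand splits it into one row per entry, and the nested value.value holds the actual data:

index=... ``` your base search; hypothetical ```
| spath path=extensionData.entries{} output=entry
| mvexpand entry
| eval key=spath(entry, "key"), value=spath(entry, "value.value")
| table key value

This yields a two-column key/value table per event; add the Name field or _time to the table command if you need one row per source object.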
sort -date | dedup Date_Month_Year | where Date>1575183600

I need this query to run only for the past 120 days from today. I can put in the date manually as above, but this needs to be more automated so anyone can run the query and get results for the current-to-120-day range. I have the following fields:

Date        Date_Friendly   Date_Month_Year   Host_Count
15786200    01/01/2020      January 2020      1234

I have tried two things and neither works:

| where (strptime(Date, "%m/%d/%Y")>=strptime("4/2/2018", "%m/%d/%Y")) AND (strptime(Date, "%m/%d/%Y")>=strptime("4/10/2018", "%m/%d/%Y"))

| eval Date="1/1/2020"
| eval timestampDate=strptime(Date, "%m/%d/%Y")
| eval timestampStart=strptime("1/1/2020", "%m/%d/%Y")
| eval timestampEnd=strptime("5/1/2020", "%m/%d/%Y")
| eval formattedTimestamp=strftime(timestamp,"%Y-%m-%dT%H:%M:%S")
| where timestampDate >= timestampStart AND timestampDate <= timestampEnd
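A minimal sketch, assuming Date holds an epoch timestamp as the sample row suggests: relative_time computes an epoch offset from now, so no hard-coded date is needed:

| where Date >= relative_time(now(), "-120d@d")

If the field were the mm/dd/YYYY string instead (as in the attempts above), the same idea becomes | where strptime(Date_Friendly, "%m/%d/%Y") >= relative_time(now(), "-120d@d").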
I want to create multiple charts that share the same time range so that I can correlate between them: for instance, CPU utilization in one chart with a second chart that shows the number of messages, filterable by component. Then I can select a component and see whether the number of messages correlates with a CPU spike. Ideally you could zoom in and all charts would update. I came close by creating a query that determined the time range and saving the min and max, but options like charting.axisX.minimumNumber don't work for timeline charts. Is there any way to accomplish this? Thanks -Paul
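A minimal sketch in Simple XML, assuming a dashboard form: a shared time input token applied to every panel keeps the charts on the same range, and changing the picker updates them all (the queries here are placeholders):

<form>
  <fieldset>
    <input type="time" token="shared_time">
      <label>Time range</label>
      <default><earliest>-4h@h</earliest><latest>now</latest></default>
    </input>
  </fieldset>
  <row>
    <panel>
      <chart>
        <search>
          <query>index=metrics sourcetype=cpu | timechart avg(pct_cpu)</query>
          <earliest>$shared_time.earliest$</earliest>
          <latest>$shared_time.latest$</latest>
        </search>
      </chart>
    </panel>
    <panel>
      <chart>
        <search>
          <query>index=app sourcetype=messages | timechart count by component</query>
          <earliest>$shared_time.earliest$</earliest>
          <latest>$shared_time.latest$</latest>
        </search>
      </chart>
    </panel>
  </row>
</form>

For zoom-style behaviour, a chart's <selection> handler can set $shared_time.earliest$ and $shared_time.latest$ from the selected span, which then re-drives the other panels.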
Hi All, looking for some help troubleshooting some odd behaviour around storing IOCs from a custom URL-based threat intelligence feed. We have successfully set it up to the point where we can receive the IOCs (at 2-hour intervals), store them, and search with them, but the IOCs seem to randomly disappear: one moment we may have 5000+ IOCs, the next we may have 0, 2000, or 4000. Our Threat Intelligence Management page states that the max size of DA-ESS-ThreatIntelligence is 100MB, and I haven't seen the threat_intel files pass 40MB. Any help troubleshooting this issue is appreciated!
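A minimal sketch for trending the counts over time, assuming the IOCs land in the standard ES threat intel KV store collections (ip_intel here is an example; substitute whichever collection your feed populates), run on a schedule so the drops can be timestamped:

| inputlookup ip_intel
| stats count
| addinfo
| eval checked_at=strftime(info_search_time, "%F %T")
| table checked_at count

Correlating the drop times against the threat intelligence download/parse logs in index=_internal may show whether an ingest step is replacing the collection with a partial set.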
Hey Splunkers, just wondering if anyone has some cool suggestions for better disk metrics. We are currently using %_disk_time (among others) for performance monitoring on our hosts. While it may be somewhat useful to know that drives are 100% busy, that fact in and of itself is not that useful: I would expect drives holding data to be 100% busy because the drive is being read from or written to almost all of the time, especially on some of our more heavily used systems. From my own basic knowledge, that metric combined with "Average Disk Queue Length" would be more relevant: if a disk is busy almost all of the time and there is a large queue, the disk might be a bottleneck and require further investigation. However, I imagine RAID configuration needs to be factored in (which I'm not sure about), and I'm wondering how others are doing it. Any help is much appreciated. I'm currently playing around with it like this:

index=windows sourcetype="PerfmonMk:LogicalDisk"
| stats avg(Current_Disk_Queue_Length) as average by host instance
| search average>1
| sort - average
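A minimal sketch combining busy time with queue depth, assuming the PerfmonMk field for the % Disk Time counter is Percent_Disk_Time (field names vary with the input's settings, so treat the names and thresholds as assumptions):

index=windows sourcetype="PerfmonMk:LogicalDisk"
| stats avg(Percent_Disk_Time) as avg_busy, avg(Current_Disk_Queue_Length) as avg_queue by host instance
| where avg_busy > 90 AND avg_queue > 2
| sort - avg_queue

The avg_queue > 2 threshold is a rough single-spindle rule of thumb; for RAID sets, the sustainable queue scales with the number of underlying disks, so the threshold would need adjusting per array.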
I have a chart of index license usage using the search below. The search works fine, but how do I convert the usage from KB to MB?

index=_internal source=license_usage.log type="usage" idx=* earliest=-7d@d
| convert timeformat="%F" ctime(_time)
| chart count over idx by _time
| eval Time=strftime(_time,"%m-%d")

Thank you
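A minimal sketch, assuming the intent is to chart actual volume rather than event count: license_usage.log type=usage records the indexed bytes in the b field, so the unit conversion is an eval before charting:

index=_internal source=license_usage.log type="usage" idx=* earliest=-7d@d
| eval mb=round(b/1024/1024, 2)
| timechart span=1d sum(mb) by idx

If the chart really must stay a count of events, note that a count has no units to convert; the KB-to-MB step only applies to a byte field like b.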
I'm looking to calculate the elapsed time between two events of different types that potentially share a common value, but in different fields. The format is something like this:

Event1: eventtype=export_start, selected_WO=XXXXXX
Event2: eventtype=export_in_progress, period_WO=XXXXXX

For successful exports, there will be an export_in_progress event whose period_WO value matches an export_start event's selected_WO value. I would like to calculate the time elapsed between the export_start event and the export_in_progress event with the same value in their respective fields. Note: the events are different enough in format that both WOs couldn't be extracted using the same field.

What I have:

index=... eventtype="export_start"
| eval workorder=selected_WO
| append
    [ search index=... eventtype="export_in_progress"
    | eval workorder=period_WO ]
| stats count by workorder _time
| sort - _time

This gives me one long list of all workorder numbers that appear in either field, with their timestamps in descending order. As an export times out after 30 seconds, my thought was to try to match duplicate workorder numbers within a 30-second window and calculate the elapsed time. If this method is reasonable, some help on the last matching piece would be much appreciated. If there is a simpler method to accomplish the same thing, that would be even better! Thanks in advance.
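A minimal sketch of a stats-based pairing, assuming each workorder has at most one start and one in-progress event in the window: coalesce unifies the two field names, and eval inside stats picks out each event type's timestamp:

index=... (eventtype="export_start" OR eventtype="export_in_progress")
| eval workorder=coalesce(selected_WO, period_WO)
| stats min(eval(if(eventtype=="export_start", _time, null()))) as start_time,
        min(eval(if(eventtype=="export_in_progress", _time, null()))) as progress_time
        by workorder
| eval elapsed=progress_time - start_time
| where elapsed >= 0 AND elapsed <= 30

This avoids append/join entirely; the 30-second cap mirrors the export timeout mentioned above.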
I am attempting to filter out event ID 5156 with an application name of "\device\harddiskvolume5\program files\bonjour\mdnsresponder.exe". I am using a Universal Forwarder, but I am seeing mixed responses saying this is not possible on a Universal Forwarder. My Universal Forwarders point to my indexer.
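A minimal sketch, assuming the data comes in via a Windows Event Log input: event log inputs support blacklist filtering in inputs.conf on the Universal Forwarder itself, using key=regex pairs, which is an exception to the general rule that UFs don't filter data (the stanza name and regex here are assumptions; adjust to your input):

# inputs.conf on the UF
[WinEventLog://Security]
blacklist1 = EventCode="5156" Message="(?i)mdnsresponder\.exe"

For non-event-log sources, the equivalent filtering would instead be done at the parsing tier (indexer or heavy forwarder) with a props/transforms rule routing matching events to nullQueue.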