All Topics

I have a use case where I need to select domain values via a dropdown. Selecting an individual domain from the dropdown works and returns results, but I'm having trouble with the default selection, which should return details for all domains. When the default selection is chosen, one set of tags should be executed; when anything other than the default is picked, a different set of tags must be executed. I'm not able to embed both sets of tags in one frame and apply conditions that pick the right snippet based on the selected dropdown value. Any suggestions?
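One common Simple XML pattern (a sketch; the token names, panel contents, and domain values are hypothetical) is to set or unset tokens in the dropdown's <change> handler and let each panel depend on one of them:

    <input type="dropdown" token="domain" searchWhenChanged="true">
      <label>Domain</label>
      <choice value="*">All domains</choice>
      <choice value="domainA">domainA</choice>
      <change>
        <!-- default "all domains" selection -->
        <condition value="*">
          <set token="show_all">true</set>
          <unset token="show_single"></unset>
        </condition>
        <!-- any other selection -->
        <condition>
          <set token="show_single">true</set>
          <unset token="show_all"></unset>
        </condition>
      </change>
    </input>

    <panel depends="$show_all$">    <!-- searches/tags for the default selection --> </panel>
    <panel depends="$show_single$"> <!-- searches/tags for a single domain --> </panel>

Panels whose depends token is unset are hidden and their searches don't run, so only the relevant set executes.
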
Hello. Please help me. I failed to get the table "sys_audit_delete" via the Splunk Add-on for ServiceNow, although I succeeded in getting "sysevent" and "sys_update_xml". I found the following error in "splunk_ta_snow_main.log". What kind of error is this (SSLError: ('The read operation timed out',)), and what should I do?

===================================================================================================================================
2020-03-10 12:03:18,680 ERROR pid=2056 tid=Thread-23 file=snow_data_loader.py:do_collect:177 | Failure occurred while connecting to https://●●●●●●.service-now.com/api/now/table/sys_audit_delete?sysparm_display_value=all&sysparm_limit=1000&sysparm_exclude_reference_link=true&sysparm_query=sys_updated_on>=2020-02-25+00:00:00^ORDERBYsys_updated_on. The reason for failure=
Traceback (most recent call last):
  File "C:\Program Files\Splunk\etc\apps\Splunk_TA_snow\bin\snow_data_loader.py", line 169, in _do_collect
    "Authorization": "Basic %s" % credentials
  File "C:\Program Files\Splunk\etc\apps\Splunk_TA_snow\bin\Splunk_TA_snow\httplib2_helper\httplib2_py2\httplib2\__init__.py", line 2135, in request
    cachekey,
  File "C:\Program Files\Splunk\etc\apps\Splunk_TA_snow\bin\Splunk_TA_snow\httplib2_helper\httplib2_py2\httplib2\__init__.py", line 1796, in _request
    conn, request_uri, method, body, headers
  File "C:\Program Files\Splunk\etc\apps\Splunk_TA_snow\bin\Splunk_TA_snow\httplib2_helper\httplib2_py2\httplib2\__init__.py", line 1737, in _conn_request
    response = conn.getresponse()
  File "C:\Program Files\Splunk\Python-2.7\Lib\httplib.py", line 1121, in getresponse
    response.begin()
  File "C:\Program Files\Splunk\Python-2.7\Lib\httplib.py", line 438, in begin
    version, status, reason = self._read_status()
  File "C:\Program Files\Splunk\Python-2.7\Lib\httplib.py", line 394, in _read_status
    line = self.fp.readline(_MAXLINE + 1)
  File "C:\Program Files\Splunk\Python-2.7\Lib\socket.py", line 480, in readline
    data = self._sock.recv(self._rbufsize)
  File "C:\Program Files\Splunk\Python-2.7\Lib\ssl.py", line 772, in recv
    return self.read(buflen)
  File "C:\Program Files\Splunk\Python-2.7\Lib\ssl.py", line 659, in read
    v = self._sslobj.read(len)
SSLError: ('The read operation timed out',)
===================================================================================================================================
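The error itself means ServiceNow accepted the connection but took too long to return data; sys_audit_delete can be a very large table. One way to check whether the slowness is on the ServiceNow side is to run the same REST query outside Splunk, e.g. with curl (instance name and credentials are placeholders; the small sysparm_limit is deliberate):

    curl -u 'user:password' \
      'https://yourinstance.service-now.com/api/now/table/sys_audit_delete?sysparm_limit=100'

If this also stalls, the instance is slow to serve that table, and narrowing the query window or the page size in the add-on's input configuration is the direction to investigate.
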
I am using Add Data with a monitor input pointing at a directory. The specified folder contains several months of daily files, all with the same structure. After the import finished I ran searches, but looking at source, not all files had been ingested. I compared the files that were ingested with the ones that were not, and found no differences in structure, character encoding, or line endings. If I add one of the skipped files individually via Add Data, it is ingested without any problem. I thought reading might just be slow, so I left it overnight, but nothing changed. Also, when I check the number of monitored files under Settings > "Files & directories", it fluctuates and matches the number of files in the folder, so the directory path does not seem to be wrong either. What are the possible reasons for this?
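A frequent cause of this symptom is a CRC collision: the monitor input fingerprints the first 256 bytes of each file, and daily files that begin with an identical header are treated as already-seen and skipped. A minimal sketch of the usual fix in inputs.conf (the monitor path is a placeholder):

    [monitor:///path/to/daily/files]
    # include the file name in the checksum so same-header files are treated as distinct
    crcSalt = <SOURCE>

Alternatively, initCrcLength can be raised so the checksum covers more than the shared header. Note that changing these settings can cause already-indexed files to be re-read, so watch for duplicates afterward.
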
Hi. I installed the IMAP Mailbox app in our distributed deployment (Heavy forwarder ---> Clustered indexers ---> Search head) and followed the steps on Splunkbase: 1 heavy forwarder --> 3 indexers --> 1 search head. It seems to work, but I noticed that it duplicates the logs every few minutes, probably every 5 minutes. I made sure disabled = true is set in every inputs.conf other than the one on the HF, and there is no imap.conf in the IMAPmailbox/local folder on the other servers. I had installed the app on all the indexers manually, but was then told we have to create the index in a specific app that is deployed to the indexers, so I removed the app from the 3 indexers to comply. When I re-enabled the imap.conf script, it downloaded the email but again created duplicates. In imap.conf I configured DeleteWhenDone = False and IMAPsearch = UNDELETED. These are also the settings in my single-instance Splunk deployment, where it works fine. Do you know what causes the duplicates? Are these types of issues supported by Splunk support? Regards, Ronald
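A quick way to quantify the duplication (a sketch; the index and sourcetype names are assumptions for wherever the mail events land):

    index=mail sourcetype=imap
    | eval raw_hash=md5(_raw)
    | stats count, values(host) AS hosts, values(source) AS sources BY raw_hash
    | where count > 1

If the duplicate copies show different host or source values, more than one instance is still running the input; if they are identical, the same input is re-reading messages that remain on the server (with DeleteWhenDone = False, UNDELETED messages stay eligible on every polling interval unless the app tracks what it has already fetched).
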
I have Splunk Enterprise on my private network. I need to send logs from Auth0, which is in the public domain, to my Splunk instance. Is there a way to achieve this?
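Auth0 log streaming delivers events over HTTPS, so the usual approach is to expose a Splunk HTTP Event Collector (HEC) endpoint that Auth0 can reach — directly, through a reverse proxy in a DMZ, or via an intermediate forwarder. A minimal connectivity test for such an endpoint (hostname and token are placeholders):

    curl -k https://splunk.example.com:8088/services/collector/event \
      -H "Authorization: Splunk 00000000-0000-0000-0000-000000000000" \
      -d '{"event": {"message": "hec connectivity test"}, "sourcetype": "auth0:test"}'

Only the HEC port (8088 by default) has to be reachable from outside; the rest of the deployment can stay private.
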
Hi, I have an issue with timestamps. Here is the problem: every day at 1 AM, all log files are copied to the log server. These logs belong to yesterday, but Splunk uses today's date (the import date) as the timestamp.

1. As you can see in the screenshot, the log file name is 20200310.bz2 but the timestamp is 3/11/20 (the log belongs to yesterday but was imported today).
2. As you can see in the screenshot, every line starts with a time, not a date, so I can't use the timestamp of the events.

Now the question is: how can I handle this with Splunk? Thanks,
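Because the events carry only a time of day, one documented approach is a custom datetime.xml that takes the date from the file name and the time from the event, referenced from props.conf. A sketch under the assumption that the file name is YYYYMMDD.bz2 and each line starts with HH:MM:SS (the sourcetype name and app path are placeholders):

    # props.conf
    [my_sourcetype]
    DATETIME_CONFIG = /etc/apps/my_app/default/datetime.xml

    # datetime.xml
    <datetime>
      <define name="date_from_file" extract="year, month, day">
        <text><![CDATA[source::.*?(\d{4})(\d{2})(\d{2})\.bz2]]></text>
      </define>
      <define name="time_from_line" extract="hour, minute, second">
        <text><![CDATA[^(\d{2}):(\d{2}):(\d{2})]]></text>
      </define>
      <timePatterns> <use name="time_from_line"/> </timePatterns>
      <datePatterns> <use name="date_from_file"/> </datePatterns>
    </datetime>

This has to be in place before the data is indexed; timestamps of already-indexed events cannot be changed.
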
My network team upgraded our Palo Altos to 9.1.1, and it looks like Palo Alto may have changed the format of GlobalProtect logs. I'm no longer seeing activity in the data model matching the following search:

    | tstats summariesonly=t latest(log.event_id) AS latest_event, values(log.agent_message) AS log.agent_message, values(log.src_ip) AS log.src_ip count
      FROM datamodel="pan_firewall"
      WHERE nodename="log.system.globalprotect"
      groupby _time log.event_id log.user
    | rename log.* as *
    | search event_id="globalprotectportal-auth-succ"

Essentially, all events matching the WHERE clause nodename="log.system.globalprotect" stopped after the upgrade. Has anyone else seen this issue? Does anyone know if there's a fix for this in the works?
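PAN-OS 9.1 moved GlobalProtect events out of the system log into a dedicated GLOBALPROTECT log type, so the raw events may now arrive under a sourcetype that the older app/add-on doesn't map into that data model node. A hedged check for where the events are landing now (the index and field names are assumptions):

    index=pan_logs globalprotect
    | stats count BY sourcetype, log_subtype

If events show up under a new sourcetype (e.g. something like pan:globalprotect in newer add-on versions), upgrading the Palo Alto Networks Add-on, or adjusting the syslog/log-forwarding profile to match what the add-on expects, is the likely fix.
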
I have an XML form with a select box control that lets users choose which fields are displayed in the output table. The selected options are stored in a token called "$fields$", and the input is set to search on change.

In the dashboard, if I use | table $fields$ at the end of my search, the results table updates every time a field is checked or unchecked, which is the behavior I'm looking for. The limitation is that I can't remove fields the users don't need to see while still keeping them for drilldown, because table discards them.

As an alternative, I tried adding the fields tag with $fields$ as the list and removing the table command. This effectively hides the extra columns while keeping the data for drilldown, but it doesn't update dynamically as field names are selected/unselected.

I want the best of both worlds: fields that update dynamically as clicked, while retaining the data in the row for drilldown. Is there a way to do this? It's multiple fields, but I'll always be able to control the field names. I'm open to using JavaScript if needed; I just haven't been able to figure out how. A limited version of my XML is below (some choices removed for brevity).

______________________The select box______________________

    <input type="checkbox" id="input_checkbox_horizontal1" searchWhenChanged="true" depends="$vsmacro$" token="fields">
      <label>Select Fields to Display</label>
      <choice value="&quot;VM Team Message&quot;">VM Team Message</choice>
      <choice value="&quot;Last Observed&quot;">Last Observed</choice>
      <choice value="&quot;Severity&quot;">Severity</choice>
      <choice value="&quot;IP Address&quot;">IP Address</choice>
      <choice value="&quot;See Also&quot;">See Also</choice>
      <choice value="&quot;CVSS Base Score&quot;">CVSS Base Score</choice>
      Notes&quot;,&quot;Note Expiration&quot;">SLM Notes Information</choice>
      <delimiter>,</delimiter>
      <default>"""Last Observed""","""Severity""","""IP Address"""</default>
      <initialValue>"Last Observed","Severity","IP Address",</initialValue>
    </input>

_____________________the tables option________________

    <query>[...a working search....]
    | table Directives $fields$
    </query>

This updates automatically (I'd assume because of the searchWhenChanged control on the box).

_____________the fields option________________

    <search id="MySearch" base="BaseSearch">
      <query>[...a working search...]</query>
    </search>
    <option name="count">10</option>
    [...removed a bunch of other "option" tags...]
    <fields>$fields$</fields>
    <drilldown>
      [...removed all the drilldown conditions...]
    </drilldown>
    </table>

Any help you might be able to provide is appreciated!!!!
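If Simple XML alone won't re-render the <fields> list when the token changes, a JavaScript hook is one option. A hedged sketch using SplunkJS — the table id "myTable" is hypothetical (attach it via <table id="myTable" ...> and load the file with <form script="update_fields.js">), and whether the rendered table honors a runtime "fields" setting may vary by Splunk version:

    // update_fields.js -- re-apply the visible-fields list whenever $fields$ changes
    require([
        'splunkjs/mvc',
        'splunkjs/mvc/simplexml/ready!'
    ], function (mvc) {
        var tokens = mvc.Components.get('default');
        tokens.on('change:fields', function (model, value) {
            var table = mvc.Components.get('myTable');
            if (table) {
                table.getVisualization(function (viz) {
                    // assumption: 'fields' controls which columns the table renders
                    viz.settings.set('fields', value);
                });
            }
        });
    });

The underlying data (and therefore the drilldown tokens) still comes from the full search; only the rendered columns change.
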
Hi community. I know this works perfectly in an on-prem environment, but for Cloud it has been a challenge to find a sustainable solution. We are looking for a way to display images on dashboards built in a Splunk Cloud environment. Has anyone had this challenge? How did it go?
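In Simple XML, images bundled with an app under appserver/static are served at /static/app/<app_name>/<file>, which also works when the app is installed in Splunk Cloud (getting the file there may require a private app upload). A sketch with placeholder app and file names:

    <panel>
      <html>
        <img src="/static/app/my_app/logo.png" style="width:200px"/>
      </html>
    </panel>

Newer Dashboard Studio dashboards alternatively let you upload images directly through the dashboard editor.
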
I am having a problem using a date range. If I run the search below, it returns 2 events and a count of 496:

    index="test2" (cRecords{}.bDate = "05/16/2019" OR cRecords{}.bDate = "05/17/2019")
    | stats count(cRecords{}.bDate)

If I run the search below, it returns 0 events and a count of 0:

    index="test2"
    | spath output=bDate path=cRecords{}.bDate
    | eval beginDate = strptime("05/16/2019","%m/%d/%Y")
    | eval endDate = strptime("05/17/2019","%m/%d/%Y")
    | eval thisDate = strptime(bDate, "%m/%d/%Y")
    | where thisDate >= beginDate AND thisDate <= endDate
    | stats count(bDate)

I think these searches should return the same results, and I know the first search is correct. Why is the second search returning nothing when I attempt to use a date range? Thank you for any information you could provide.
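A likely explanation: with 2 events but 496 counted values, cRecords{}.bDate is a multivalue field, and strptime() returns NULL when handed a multivalue input, so thisDate is never set and the where clause drops every event. A sketch of the same range filter with the multivalue field expanded first:

    index="test2"
    | spath output=bDate path=cRecords{}.bDate
    | mvexpand bDate
    | eval thisDate = strptime(bDate, "%m/%d/%Y")
    | where thisDate >= strptime("05/16/2019","%m/%d/%Y") AND thisDate <= strptime("05/17/2019","%m/%d/%Y")
    | stats count(bDate)

mvexpand turns each event into one event per bDate value, after which strptime() operates on single values as expected.
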
We are using Boomi for the first time as the platform for our website. We also want to use Splunk for web monitoring. Has anyone used Boomi and Splunk together?

Hi, I am trying to find failed and successful logins for all users per source IP, so the output would look something like:

    ip 1.1.1.1 ... users: john doe 4 failed and 1 success; jay sean 5 failed 2 success
    ip 1.2.3.4 ... user: tim smith 3 failed 1 success

Starting query:

    index=authentication type=* | ...

where type is the type of failure.
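A hedged sketch, assuming src_ip and user fields exist and that the type field distinguishes successes from failures (the exact field names and values are assumptions):

    index=authentication
    | stats count(eval(type!="success")) AS failed,
            count(eval(type="success"))  AS success
            BY src_ip, user

count(eval(...)) only counts events where the expression is true, which gives per-user, per-IP failure and success totals in a single pass.
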
Hi, I am looking to create a bubble chart like this: https://www.highcharts.com/demo/bubble, where I can click and drag the mouse pointer to draw a box in the chart. Every value inside the box would then be passed as a filter to subsequent charts. In the case of the linked demo, you can select a group of bubbles (countries), and these countries should act as filters for subsequent charts. Any help?

Note: I can do drilldown from one chart to another based on a single selected bar (in the case of a bar chart). I am more interested in the multiselect. Examples in D3.js: https://observablehq.com/@d3/brushable-scatterplot-matrix https://bl.ocks.org/mthh/e9ebf74e8c8098e67ca0a582b30b0bf0
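The brushing itself would need a custom visualization that writes the selection into a token (e.g. via the SplunkJS tokens model), but the downstream part is straightforward: a multivalued selection can be passed as one comma-separated token and consumed with the IN operator. A sketch with hypothetical token, index, and field names:

    index=web_metrics country IN ($selected_countries$)
    | stats count BY country
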
I want to index all *.log files recursively from /var/log. I followed these instructions: https://docs.splunk.com/Documentation/Splunk/8.0.2/Data/Specifyinputpathswithwildcards

My inputs.conf looks like this:

    [monitor:///var/log/]
    whitelist = \.log$
    recursive = true
    disabled = false
    index = rpi_logs
    sourcetype = linux_logs

It seems to be indexing only /var/log/daemon.log and /var/log/auth.log, but I also have log files in the /var/log/mysql and /var/log/nginx directories, and those are omitted. What am I doing wrong?
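The stanza itself looks reasonable (recursive = true is the default for monitor inputs), so a common culprit is file permissions: on many distributions /var/log/mysql and /var/log/nginx are readable only by root or a service group, and a Splunk instance running as the splunk user silently skips them. A couple of hedged checks (the user name is an assumption about how Splunk runs here):

    # can the splunk user see the files at all?
    sudo -u splunk ls -l /var/log/mysql /var/log/nginx

    # what does splunkd say about the monitored tree?
    index=_internal sourcetype=splunkd (component=TailReader OR component=WatchedFile) "/var/log/mysql"

Granting the splunk user read access (e.g. membership in the adm/mysql groups, plus execute permission on the directories) is usually the fix.
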
When someone gets activated and deactivated, this data is consolidated -- always. My question is: how can I separate this data based on timestamps? If someone is deactivated and then reactivated afterward, how can I exclude that case from an input lookup?

I have a search that I'm using, but I'm running into false positives because of what I mentioned above: sometimes someone is re-activated after deactivation, and I need Splunk to recognize that a re-activation after deactivation is not an alert or an actionable offense. What commands or other approaches can I use? Thank you.

Search:

    (index="AD" OR index="ADUser") eventType="SSO" OR eventType="start"
    | eval from="search"
    | append [| inputlookup users.csv | table users | eval from="lookup"]
    | stats values(from) as from by users
    | where mvcount(from)=2 AND from="lookup"
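One hedged way to ignore users whose deactivation was followed by a re-activation is to keep only the most recent status event per user and alert only on those whose latest state is still deactivated (the eventType values and user field are assumptions):

    (index="AD" OR index="ADUser") eventType="deactivated" OR eventType="activated"
    | stats latest(eventType) AS last_state, latest(_time) AS last_change BY user
    | where last_state="deactivated"

Because stats latest() follows _time order, a deactivate-then-reactivate sequence ends with last_state="activated" and drops out of the results, which removes that class of false positive before any lookup comparison.
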
I'm getting ready to upgrade my cluster from 6.4.2 to 6.6.12 to 7.3.4. However, my replication factor and search factor are not met. Is it OK to proceed with the upgrade anyway? If not, what can be done to fix it? I've searched for resolutions but have not found anything that applies or works yet.

RF=2, SF=2. I have a master, 2 indexers/peers, and 2 search heads.

    Fixup Tasks in Progress = 0
    Fixup Tasks Pending = 12737
    Excess Buckets = 0

Most fixup statuses are "cannot fix up search factor as bucket is not serviceable" or "Missing enough suitable candidates to create searchable copy in order to meet replication policy. Missing={ default:1 }".
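With only 2 peers and RF=SF=2, a single peer that is down, detaching, or out of disk leaves no candidate for the second copy, which matches the "missing enough suitable candidates" fixup message. Some hedged commands to run on the cluster master before deciding:

    # overall cluster and per-peer state
    splunk show cluster-status

    # during the rolling upgrade itself, suppress fixup churn
    splunk enable maintenance-mode
    # ... upgrade peers one at a time ...
    splunk disable maintenance-mode

Upgrading while RF/SF are unmet is risky because bucket fixup and upgrade restarts compound each other; it is generally better to get both peers Up and the pending fixups drained (or at least explained) first.
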
I'm trying to build a simple app to deploy to all of my Windows hosts to collect the CPU, memory, HDD, etc. that I'm looking for using SAI. I created the app and deployed it using forwarders, and it's working great. The only "dimension" I'm collecting at the moment is entity_type::Windows_Host, but I want to add more dimensions once inside the SAI app -- application names, server classes, etc. -- AFTER I've already ingested the raw data. Is that possible?

Part of my inputs.conf (for example):

    [perfmon://PhysicalDisk]
    counters = % Disk Read Time;% Disk Write Time
    instances = *
    interval = 30
    mode = single
    object = PhysicalDisk
    index = em_metrics
    _meta = entity_type::Windows_Host
    useEnglishOnly = true
    sourcetype = PerfmonMetrics:PhysicalDisk
    disabled = 0

The reason? I want one app to deploy to all 10k Windows forwarders to collect the data I need, without having to build custom server classes and apps for every possible bundle of servers and label the dimensions in each app. That would be a huge time saver and more dynamic. Ideas?
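Dimensions on metric data points are written at ingest time, so they can't be attached retroactively to data that is already indexed; they only apply going forward. The _meta key does accept multiple space-separated dimensions, though, so one generic app can still stamp several (the extra dimension names and values here are hypothetical):

    [perfmon://PhysicalDisk]
    # ... same stanza as above ...
    _meta = entity_type::Windows_Host app_name::my_app server_class::web

For per-host values without per-class apps, hedged options include templating the value from something the deployment process controls, or enriching at search time instead, e.g. joining a lookup keyed on host when querying the metrics.
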
{ "message": { "correlation": "12345678", "headers": {}, "protocol": "HTTP/1.1", "remote": "111.11.11.111", "requestMethod": "GET", "requestPath": "/abc/<dynamic_value>/xyz... See more...
{ "message": { "correlation": "12345678", "headers": {}, "protocol": "HTTP/1.1", "remote": "111.11.11.111", "requestMethod": "GET", "requestPath": "/abc/<dynamic_value>/xyz", "type": "request" } } Here from "message.requestMethod" and "message.requestPath" I need to find unique combinations and give some name to it. "message.requestPath" can have different endpoints. Tried something like below which is not working: searchquery | eval api = case (message.requestMethod = GET AND message.requestPath="/abc/<dynamic_value>/xyz", "GET_VERSION_API", message.requestMethod = POST AND message.requestPath="/abc/<dynamic_value>/xyz", "POST_VERSION_API", 1 = 1, "default") | stats count by api
I am attempting to display unique values in a table. Some of the fields are empty and some are populated with the respective data. For example, I only want the unique combinations of the following fields from each of the events:

    systemname | domain  | os
    system1    | abc.com | Windows 10
    system2    |         | Windows 7
    system3    | abc.com | Windows 10
    system4    | abc.com | Windows 7
    system1    | abc.com | Windows 10
    system2    |         | Windows 7
    system3    | abc.com | Windows 10
    system4    | abc.com | Windows 7

When I run the command:

    | dedup systemname, domain, os | table systemname, domain, os

I get the following results:

    system1 | abc.com | Windows 10
    system3 | abc.com | Windows 10
    system4 | abc.com | Windows 7

The desired result is:

    system1 | abc.com | Windows 10
    system2 |         | Windows 7
    system3 | abc.com | Windows 10
    system4 | abc.com | Windows 7

It is not listing the rows with the blank field. I tried various options with the dedup command, such as keepempty=true, but that is not working. I also tried uniq, but my understanding is that it compares the entire event, which is not what I want.
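dedup skips events where any of the listed fields is null, and keepempty=true keeps those events without collapsing the duplicates among them. A hedged workaround is to make the empty value explicit before deduplicating:

    | fillnull value="" systemname domain os
    | dedup systemname domain os
    | table systemname domain os

fillnull turns the missing domain values into empty strings, so system2's rows become identical and dedup collapses them to one while still showing the blank column.
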
I have a script for Linux that executes "sar -n DEV" and formats the output to look like:

    Linux <kernel version> (<hostname>) <date> <arch> (<#> CPU)

    Average: <interface> <field1> <field2> <field3>
    Average: <interface> <field1> <field2> <field3>
    Average: <interface> <field1> <field2> <field3>

Using Splunk Web's field extractor, I have a regex that applies field extraction to the first "Average:" line. How do I make the extraction apply to every "Average:" line that exists?
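Inline extractions match once per event, so if the whole sar report is one event, only the first "Average:" line is captured. Two hedged options (the sourcetype name is a placeholder, and the regex is a sketch of the pattern described above, not the exact one from the field extractor):

    # Option 1: break each line into its own event, so the extraction runs per line
    # props.conf
    [sar_dev]
    SHOULD_LINEMERGE = false
    LINE_BREAKER = ([\r\n]+)

    # Option 2: keep one event but collect all matches into multivalue fields
    # props.conf
    [sar_dev]
    REPORT-sar_avg = sar_avg

    # transforms.conf
    [sar_avg]
    REGEX = (?m)^Average:\s+(\S+)\s+(\S+)\s+(\S+)\s+(\S+)
    FORMAT = interface::$1 field1::$2 field2::$3 field3::$4
    MV_ADD = true

With MV_ADD = true, repeated matches append to the same field as multiple values instead of being ignored.
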