All Topics


Good day, the Splunk login URL http://127.0.0.1:8000/en-US/account/login?return_to=%2Fen-US%2F is not working. Has the URL changed, or am I doing something wrong? Please help. Thanks,
ALCON, hello, I am having issues with printmon query results not showing the proper results for "total_pages". The page_printed value is always zero (0). Moreover, the total_pages value is also not right: when I print 5 pages it reports only 1. Any solution to that? One example query (all "printmon" queries give me the same inaccurate results):

index=wineventlog eventtype=printmon_windows (host="Printer Name" OR host="Printer Name") user="specific user if needed"
| table _time, user, document, machine, printer, driver_name, total_pages, size_bytes
| rename user as "User", document as "Document", machine as "Host", printer as "Location", driver_name as "Driver", total_pages as "Total Pages", size_bytes as "Bytes"
| dedup document
| sort - _time

Other links about this subject, but with old info and no solution or fix:
1. WinPrintMon not logging page_printed correctly (24 May 2015)
Link: https://community.splunk.com/t5/Getting-Data-In/WinPrintMon-not-logging-page-printed-correctly/m-p/121725
2. winprintmon search results aren't showing the proper results for "total_pages" (20 Feb 2019 at 08:26)
Link: https://community.splunk.com/t5/Splunk-Search/winprintmon-search-results-aren-t-showing-the-proper-results-for/m-p/392683#M172918

Please provide an example query or point me to where to find the fix.
Hello, I have a monthly report that is huge (approx. 300 MB) and would like it to be exported to an external SFTP server. I do not see any such option in Report Actions at present. Any ideas on how this can be achieved would be of great help. I know it can be done with a custom script that copies the results to the SFTP server from a specific path after the results are dumped to a lookup, but I want to explore other, more direct integration options.
Hi Team, I am trying to write a field-extraction regex for the structure below, which appears in every single event, but it does not allow me to use the same field name for two entries of the same type when they appear in the same event. For example:

{ "class1": { "student1": "123 rollnumber" }, "class2": { "student1": "123 rollno", "student2": "321 rollno" } }

1) class1 and class2 should be under a Class field. If I search for class1, I should only find its student1 and related info, and if I search for class2, I should only find its students and related info. The fields should be class, student, number, and type of number:

Class:           class1      class2
student name:    student1    student1, student2
number:          123         123, 321
type of number:  rollnumber  rollno, rollno
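One possible approach (an untested sketch, assuming the JSON above is the raw event in _raw): extract each class block and its body as aligned multivalue fields, expand them into one row per class, and then extract the per-student fields from each body.

```
| rex max_match=0 "\"(?<class>class\d+)\":\s*\{(?<students>[^}]+)\}"
| eval pair=mvzip(class, students, "|")
| mvexpand pair
| eval class=mvindex(split(pair, "|"), 0), students=mvindex(split(pair, "|"), 1)
| rex field=students max_match=0 "\"(?<student>student\d+)\":\s*\"(?<number>\d+)\s+(?<number_type>\w+)\""
| table class student number number_type
```

With one row per class, searching on class=class1 only returns that class's students.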
Can we create a new field that contains a group of multiple server names, and use that field directly in all queries (reports, alerts, and so on), so I don't need to list the server names every time and can just use the one created field directly? For example: index=* sourcetype=* host=X. Here I want to define x = Server A + Server B + Server C. Is this possible in Splunk?
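A common way to do this is a search macro (or an event type) rather than a literal field. A sketch, where the macro name and hostnames are placeholders:

```
# macros.conf (e.g. in an app's local directory)
[my_server_group]
definition = (host="serverA" OR host="serverB" OR host="serverC")
```

Then every report or alert can use it as: index=* sourcetype=* `my_server_group` — and updating the group in one place updates all searches that reference it.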
Hello, does anyone know if it is possible to remove/delete a smart agent that is no longer in use from the Controller UI? We used it for testing, and now want to remove it from the Controller UI.
I have a Dashboard Studio dashboard and want to set a token for an input (like a text input or dropdown input), triggered by an interaction with another element within the dashboard. I already tried to do that with the Interaction --> Set Token option and specified the token name as "form.tokenname". This did not work; the value of the token was not changed. Is there a way to achieve this in Dashboard Studio, like it works in a Classic XML dashboard by setting the token with "form.tokenname"?
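In Dashboard Studio, interaction tokens are defined via eventHandlers in the dashboard's JSON source, and tokens set this way are generally referenced as $tokenname$ without the "form." prefix. A sketch of a table visualization that sets a token on click (the token and field names are placeholders, not from your dashboard):

```json
{
  "type": "splunk.table",
  "eventHandlers": [
    {
      "type": "drilldown.setToken",
      "options": {
        "tokens": [
          { "token": "tokenname", "key": "row.fieldname.value" }
        ]
      }
    }
  ]
}
```

Other panels can then consume the value as $tokenname$ in their search strings.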
Hello everyone, I have an issue getting data in from my Linux host to Splunk Observability. I have Ubuntu 23 running an nginx service, and I want to monitor nginx in Splunk APM. Because the Splunk OTel Collector does not support Ubuntu 23, I am using the OTel Collector add-on in the Universal Forwarder. The problem is that the host and service do not show up in the dashboard even though otel.log shows the OTel agent is running. I have allowed the ports used by the Splunk OTel Collector and the connection to Observability as in this doc: https://docs.splunk.com/observability/en/gdi/opentelemetry/exposed-endpoints.html#otel-exposed-endpoints. I have also followed this doc for the NGINX configuration: https://docs.splunk.com/observability/en/gdi/monitors-hosts/nginx.html. But the service and host still do not show in the Splunk Observability dashboard. FYI, I am still on the trial version, so I hope someone here can help me solve this. I have not yet found why the host is not showing up, but I think the cause for the nginx service not showing is this, from Splunk_TA_Otel.log: Thanks
Hi, the lookup table doesn't contain the current version of the forwarder. Instead, the highest version ever seen is stored. For example, if forwarders were downgraded, it is necessary to rebuild the forwarder assets table manually. The issue is caused by line 4, "max(version) as version", in the search "DMC Forwarder - Build Asset Table":

`dmc_set_index_internal` sourcetype=splunkd group=tcpin_connections (connectionType=cooked OR connectionType=cookedSSL) fwdType=* guid=*
| stats values(fwdType) as forwarder_type, latest(version) as version, values(arch) as arch, values(os) as os, max(_time) as last_connected, sum(kb) as new_sum_kb, sparkline(avg(tcp_KBps), $sparkline_span$) as new_avg_tcp_kbps_sparkline, avg(tcp_KBps) as new_avg_tcp_kbps, avg(tcp_eps) as new_avg_tcp_eps by guid, hostname
| inputlookup append=true dmc_forwarder_assets
| stats values(forwarder_type) as forwarder_type, max(version) as version, values(arch) as arch, values(os) as os, max(last_connected) as last_connected, values(new_sum_kb) as sum_kb, values(new_avg_tcp_kbps_sparkline) as avg_tcp_kbps_sparkline, values(new_avg_tcp_kbps) as avg_tcp_kbps, values(new_avg_tcp_eps) as avg_tcp_eps by guid, hostname
| addinfo
| eval status = if(isnull(sum_kb) or (sum_kb <= 0) or (last_connected < (info_max_time - 900)), "missing", "active")
| eval sum_kb = round(sum_kb, 2)
| eval avg_tcp_kbps = round(avg_tcp_kbps, 2)
| eval avg_tcp_eps = round(avg_tcp_eps, 2)
| fields guid, hostname, forwarder_type, version, arch, os, status, last_connected, sum_kb, avg_tcp_kbps_sparkline, avg_tcp_kbps, avg_tcp_eps
| outputlookup dmc_forwarder_assets
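One possible workaround (a sketch, not an official patch): make the merging stats prefer the version from the most recently connected entry instead of the numeric maximum, by sorting on last_connected before the merge and taking first(version):

```
... | inputlookup append=true dmc_forwarder_assets
| sort 0 - last_connected
| stats values(forwarder_type) as forwarder_type, first(version) as version, ... by guid, hostname
```

With this change, a downgraded forwarder's newer connection wins over the older, higher version stored in the lookup, so the table no longer needs manual rebuilding after downgrades.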
Below is the error; how do I fix this?

2024-08-05 21:46:52,757 ERROR pid=2311415 tid=MainThread file=base_modinput.py:log_error:309 | Get error when collecting events.
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/TA-palo_alto_cortex_xdr_alert_retriever/bin/ta_palo_alto_cortex_xdr_alert_retriever/aob_py3/modinput_wrapper/base_modinput.py", line 128, in stream_events
    self.collect_events(ew)
  File "/opt/splunk/etc/apps/TA-palo_alto_cortex_xdr_alert_retriever/bin/cortex_xdr_alerts.py", line 64, in collect_events
    input_module.collect_events(self, ew)
  File "/opt/splunk/etc/apps/TA-palo_alto_cortex_xdr_alert_retriever/bin/input_module_cortex_xdr_alerts.py", line 153, in collect_events
    left_to_return = total_endpoints - result_count
UnboundLocalError: local variable 'total_endpoints' referenced before assignment
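This UnboundLocalError generally means 'total_endpoints' is only assigned inside a branch that didn't run (for example, when the API response is empty or lacks the expected field), and is then used unconditionally. A minimal sketch of the pattern and a defensive fix — the real add-on code differs, and the function and key names here are made up for illustration:

```python
# Sketch of the bug pattern: the variable is only bound inside a branch.
def count_left_bad(response, result_count):
    if "total_count" in response:
        total_endpoints = response["total_count"]
    # If the key was missing, the next line raises UnboundLocalError.
    return total_endpoints - result_count

# Fix: initialize the variable before any conditional assignment.
def count_left_fixed(response, result_count):
    total_endpoints = 0
    if "total_count" in response:
        total_endpoints = response["total_count"]
    return total_endpoints - result_count
```

In the add-on, the equivalent fix would be to initialize total_endpoints (and verify the API response actually contains the count) before line 153 of input_module_cortex_xdr_alerts.py.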
Hi Splunk Experts, I'm not sure how easy this is in Splunk: I have a field (_time) with a list of epoch time values in it. I want to loop through each value and run a search for each, substituting the time value for $_time$ in the query below. Any advice would be much appreciated. Thanks in advance!

_time
1722888000
1722888600
1722889200
1722889800
1722890400
1722891000

index=main earliest=$_time$-3600 latest=$_time$ user arrival
| timechart span=10m count by user limit=15
| untable _time user value
| join type=left user [| inputlookup UserDetails.csv | eval DateStart=strftime(relative_time($_time$, "-7d@d"), "%Y-%m-%d") | where Date > DateStart]
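One way to loop over field values is the map command, which runs its search once per input row and substitutes field values via $field$ tokens. A sketch (makeresults here just reproduces the six epoch values from the question; map is limited by maxsearches and can be slow):

```
| makeresults count=6
| streamstats count as n
| eval _time=1722888000 + (n-1)*600
| eval e=_time-3600, l=_time
| map maxsearches=10 search="search index=main earliest=$e$ latest=$l$ user arrival
    | timechart span=10m count by user limit=15
    | untable _time user value"
```

Computing the earliest/latest bounds into plain fields (e, l) first avoids doing arithmetic inside the time modifiers.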
I am a beginner learning Splunk. I uploaded a file that contains simulated data for analysis, but I am getting wrong data in Splunk. The data is correct when I try opening it with
Is it possible to perform a "left join" lookup from a CSV to an index? Usually a lookup starts with the index, then the CSV file, and the result only produces the correlation (inner join):

index=owner | lookup host.csv ip_address AS ip OUTPUTNEW host, owner

but I am looking for a left join that still retains all of the data in host.csv. Thank you for your help. Please see the example below:

host.csv
ip_address  host
10.1.1.1    host1
10.1.1.2    host2
10.1.1.3    host3
10.1.1.4    host4

index=owner
ip          host    owner
10.1.1.3    host3   owner3
10.1.1.4    host4   owner4
10.1.1.5    host5   owner5

Left join "lookup" (expected output) - yellow and green circles (see drawing below):
ip_address  host    owner
10.1.1.1    host1
10.1.1.2    host2
10.1.1.3    host3   owner3
10.1.1.4    host4   owner4

Normal "inner join" lookup (index first, then CSV) - green circle:
ip_address  host    owner
10.1.1.3    host3   owner3
10.1.1.4    host4   owner4
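One way to get the left-join behavior (a sketch using the field names from the example): start from the lookup file and merge the event data onto it, so every host.csv row survives even without a matching event.

```
index=owner
| rename ip as ip_address
| fields ip_address owner
| inputlookup append=true host.csv
| stats values(host) as host, values(owner) as owner by ip_address
```

The append-plus-stats pattern avoids the row and memory limits of the join command; rows that only exist in host.csv simply end up with an empty owner column.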
Hello friends, I am trying to create a heat map with the indexes on the left side, where each cell of the heat map shows a host with its health color, to make monitoring more practical, but I can't get it to work. Has anyone attempted this search and can help me?
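A possible starting point (a sketch; the lag thresholds and the idea of using data freshness as "health" are assumptions, since the question doesn't define health): build one row per index with a column per host, then apply color formatting to the cells in the dashboard.

```
| tstats latest(_time) as last_seen where index=* by index, host
| eval lag_min=round((now() - last_seen) / 60)
| eval health=case(lag_min > 60, "red", lag_min > 15, "yellow", true(), "green")
| chart values(health) over index by host
```

The chart produces the index-by-host grid; the red/yellow/green coloring itself is then done with column/cell color formatting on the table or heat map visualization.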
We have a legacy all-in-one Splunk server. We have built out a new distributed environment with an indexer cluster and are phasing everything over to it. The new search head cluster can search both the new indexers and the old all-in-one server. We plan on migrating all the UFs to send their logs to the new indexers soon. I have also set up the old server to search the new indexers, so alerts and reports on that server will continue to work once the inputs are sent to the new indexers. That way we can handle report migration separately and not have to do everything in one fell swoop. I was also planning on changing the old indexer into a heavy forwarder once the UFs are repointed, so that any scripted or other inputs it has will be sent to the new indexers. My question: if I flip it to be a heavy forwarder, I know any new inputs it gets will be sent to the new indexers, which should be fine for alerts and reports since it can also search the new indexers. However, the existing stored logs will not be migrated to the new indexers. Will the old server still be able to search those logs once it is a heavy forwarder? Will it still search its own locally indexed data as well as the new indexers? Thanks.
For some reason my |tstats count query returns a result of 0 when I add an OR condition to my where clause, if the field doesn't exist in the dataset, or if the OR condition specifies a string value when the value for the field in the data is always an integer. For example, this query returns the correct event count (or at least it's non-zero):

|tstats count where index="my_index" eventOrigin="api" (accountId="8674756857")

Adding this OR condition returns a count of zero -- why? Note that for this time range there are no events with a serviceType field, but for other time ranges there are events with a serviceType field:

|tstats count where index="my_index" eventOrigin="api" (accountId="8674756857" OR serviceType="unmanaged")

Adding this OR condition also returns zero -- why? It's true that accountId should normally be an integer, but it's an OR, so I still expect it to count those events:

|tstats count where index="my_index" eventOrigin="api" (accountId="19783038942" OR accountId="aaa")

Using a * results in the same non-zero count as the first query, which is expected, even though there are no events with a serviceType field:

|tstats count where index="my_index" eventOrigin="api" (accountId="8674756857" OR serviceType="unmana*")

Why would adding an OR condition in tstats cause the count to be zero? The same problem does not occur with a regular search query. I am on Splunk 9.1.0.2.
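A possible workaround while the root cause is unclear (a sketch; it does not explain the behavior): run one tstats per condition and combine the results. Note this double-counts any event matching both conditions, so it is only safe when the conditions are mutually exclusive, as accountId and serviceType appear to be here.

```
| tstats count where index="my_index" eventOrigin="api" accountId="8674756857" by index
| append [| tstats count where index="my_index" eventOrigin="api" serviceType="unmanaged" by index]
| stats sum(count) as count
```

Alternatively, the wildcard form observed to work above (serviceType="unmanaged*") can serve as a stopgap for the exact-string case.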
I have a data set for web traffic. A sessionID ties all traffic for an individual browsing session together - all events from the time you open a new tab until you close it. There is also a memberID field. Some records do NOT have a value populated for memberID, so I want to return each record for the sessionID with the FIRST non-null value found in the sessionID. For instance:

time   sessionID  memberID  evalField
12:01  1          <NULL>    abc
12:02  1          <NULL>    abc
12:03  1          abc       abc
12:04  1          <NULL>    abc

Can someone help me out with how to get this evalField column? Thanks so much!
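One way (a sketch; the index and sourcetype are placeholders for your web-traffic search): eventstats aggregation functions ignore null values, so earliest(memberID) picks the chronologically first non-null memberID in each session and copies it onto every row of that session.

```
index=web sourcetype=access_combined
| eventstats earliest(memberID) as evalField by sessionID
```

Unlike stats, eventstats keeps all the original events, so each record retains its own fields plus the new evalField column.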
Hi Team, we are trying to upgrade the Splunk OpenTelemetry Collector to version v0.104, so we wanted to know its compatibility with Kubernetes version v1.29. Kindly assist. Thanks
Hello, I've tried to create a pie chart depicting the different disks and their free/used space. Via trellis I want to show it by instance (the disk in question, for example C:\). I've used the following SPL to find the needed values for free and used (full) disk space, but it doesn't give me a correct pie chart. I'm fairly sure it is because I need to reshape the headers or the fields into a form usable by the pie chart, but I can't seem to find a good way how. Can anyone help me? Thanks a lot! André
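A pie chart generally wants results in "long" form: one row per slice, with a label field and a single numeric field (plus the field you trellis on). A sketch assuming Windows Perfmon LogicalDisk data — the index, sourcetype, and field names are assumptions about the environment, since the original SPL isn't shown:

```
index=perfmon sourcetype="Perfmon:LogicalDisk" counter="% Free Space" instance!="_Total"
| stats latest(Value) as Free by instance
| eval Used=round(100 - Free, 1), Free=round(Free, 1)
| untable instance state percent
```

untable turns the wide Free/Used columns into rows of instance / state / percent, so the pie chart can use state as the slice label and percent as the size, with trellis split by instance.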