All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi, how can we install the Splunk Enterprise Compatibility app on Splunk Cloud? Are there any modifications needed to ensure it's compatible with Splunk Cloud?
Here are the screenshots: in the Incident Review settings, I have already labeled signature. Then in the correlation search content settings, I have also set up the search query, which produces fields including signature. This search runs normally on the search head and shows the result I want. But in the related drill-down search and description, the $signature$ token does not render in the notable on Incident Review. May I ask how to solve this issue?
We have a sandbox environment with vSphere and it works mostly fine. We believe the time sync is correct because it is set to auto-update from the internet, and for the sake of being free of errors we have disabled firewalld (this is a mostly Linux environment). However, we are getting the following errors; see attached.
This query shows the dates haphazardly; how can I sort them like 1/4/2024, 1/3/2024, 1/2/2024, ...?

index="*" source="*" | eval timestamp=strftime(_time, "%m/%d/%Y") | chart limit=30 count as count over DFOINTERFACE by timestamp
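One possible approach, sketched without access to the actual data: the "%m/%d/%Y" format sorts lexically, so the month dominates the ordering. Formatting the date year-first makes the lexical order of the column names match the chronological order, and `bin` keeps one column per day:

```
index="*" source="*"
| bin _time span=1d
| eval timestamp=strftime(_time, "%Y/%m/%d")
| chart limit=30 count over DFOINTERFACE by timestamp
```

This yields columns in chronological order; to show the newest date first (1/4, then 1/3, ...), the axis can usually be reversed in the visualization settings rather than in the search itself.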
Per the documentation (https://docs.splunk.com/Documentation/Splunk/9.1.2/Viz/ChartConfigurationReference), the property charting.chart.showDataLabels only allows the values (all | minmax | none). I am attempting to hide data labels for a specific field but enable them for other specified fields, similar to how charting.fieldColors uses a map, but that type is obviously not accepted for the showDataLabels property:

<option name="charting.chart.showDataLabels"> {"field1":none, "field2":all} </option>

Is there a possible workaround for this?
I have a report, part of a dashboard, that lists malware received by email. Some months the list for each person can have dozens of events. Management would like to show only the latest 5 events for each person, and I'm having difficulty finding a good way to accomplish this.

Search:

index="my_index" [| inputlookup InfoSec-avLookup.csv | rename emailaddress AS msg.parsedAddresses.to{}] final_module="av" final_action="discard" | rename msg.parsedAddresses.to{} AS To, envelope.from AS From, msg.header.subject AS Subject, filter.modules.av.virusNames{} AS Virus_Type | eval Time=strftime(_time,"%H:%M:%S %m/%d/%y") | stats count, list(From) as From, list(Subject) as Subject, list(Time) as Time, list(Virus_Type) as Virus_Type by To | search [| inputlookup InfoSec-avLookup.csv | rename emailaddress AS To] | sort -count | table Time,To,From,Subject,Virus_Type | head 5

Current output (seven rows per user; the same pattern repeats for user2 and user3):

time - user1 - sender1@xyz.com - Subject1 - Virus_A
time -       - sender2@xyz.com - Subject1 - Virus_B
time -       - sender2@xyz.com - Subject1 - Virus_C
time -       - sender2@xyz.com - Subject1 - Virus_B
time -       - sender2@xyz.com - Subject1 - Virus_B
time -       - sender2@xyz.com - Subject1 - Virus_B
time -       - sender2@xyz.com - Subject1 - Virus_B

I'd like to limit it to the latest 5 events per user (again, the same pattern for user2 and user3):

time - user1 - sender1@xyz.com - Subject1 - Virus_A
time -       - sender2@xyz.com - Subject1 - Virus_B
time -       - sender2@xyz.com - Subject1 - Virus_C
time -       - sender2@xyz.com - Subject1 - Virus_B
time -       - sender2@xyz.com - Subject1 - Virus_B

Any help greatly appreciated! Thank you!
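A sketch of one way to do this, untested against the actual data: sort by time and rank events within each recipient with streamstats before aggregating, so only the latest 5 per user survive into the stats:

```
index="my_index" final_module="av" final_action="discard"
    [| inputlookup InfoSec-avLookup.csv | rename emailaddress AS msg.parsedAddresses.to{}]
| rename msg.parsedAddresses.to{} AS To, envelope.from AS From,
         msg.header.subject AS Subject, filter.modules.av.virusNames{} AS Virus_Type
| sort 0 - _time
| streamstats count AS recent_rank BY To
| where recent_rank <= 5
| eval Time=strftime(_time, "%H:%M:%S %m/%d/%y")
| stats count, list(Time) AS Time, list(From) AS From, list(Subject) AS Subject,
        list(Virus_Type) AS Virus_Type BY To
| sort - count
```

The key idea is that `streamstats count BY To` numbers events per recipient in descending time order, so `where recent_rank <= 5` keeps exactly the five most recent per user before the list() aggregation.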
Hello, I have a standalone Splunk Enterprise system (version 9.x) with 10 UFs reporting (Splunk Enterprise and the UFs are all on Windows). The standalone system is an all-in-one: indexer, search head, deployment server, license manager, monitoring console.

I created a deployment app to push out a standard outputs.conf file to all the UFs, and it pushed out successfully, just like all the other deployment apps. I deleted ~etc\system\local\outputs.conf from the UFs, restarted the Splunk UF, and made sure the deployment app showed up in ~etc\apps\ (it did). But now that outputs.conf is no longer in ~etc\system\local, I'm getting this:

WARN AutoLoadBalancedConnectionStrategy [pid TcpOutEloop] - cooked connection to ip=<xx.xx.xxx.xxx>:9997 timed out

I've made sure there isn't any other outputs.conf, especially not in ~etc\system\local, so that it doesn't mess with the order of precedence, restarted the UF, and every time I get the same warning. Of course, the logs aren't being sent to the indexer; the UF still phones home, but sends no actual logs. When I run:

btool --debut outputs.conf list

I don't get any output. But as soon as I get rid of this deployment app, put the same outputs.conf file back in ~etc\system\local, and restart the UF, logs are sent to the indexer. My deployment app's structure is the same as the other deployment apps that do work. What am I doing wrong? Thanks.
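Two things worth checking here. First, the btool flag is spelled --debug, so the command as posted may simply not be listing anything for that reason. Second, for comparison, a minimal deployment-app layout that works in setups like this (app name and indexer address are placeholders; the conf file must sit in the app's local or default directory, not the app root):

```
# On the deployment server:
#   etc/deployment-apps/all_uf_outputs/local/outputs.conf
# which arrives on the UF as:
#   etc/apps/all_uf_outputs/local/outputs.conf

[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = xx.xx.xxx.xxx:9997
```

On the UF, `splunk btool outputs list --debug` should then show each stanza together with the file it came from, which quickly reveals whether the deployed copy is being read at all.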
I'm currently collecting logs with the Windows universal forwarder. My client has requested a copy of the logs collected from the Windows sources for the last 2 months. Is there any way to access this information, or is the only way to run a query like index=main | fields _raw?
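A query along those lines is indeed the usual route, sketched here with placeholder host and index values: restrict by host and time range, then export the results (for example via the export button in Splunk Web, choosing raw events):

```
index=main host=WIN-* earliest=-60d@d latest=now
| table _time host source sourcetype _raw
```

For large volumes, exporting from the CLI or the REST export endpoint tends to be more reliable than the browser, since web exports can truncate very large result sets.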
Is it possible to create a Splunk app with a trial feature? Trial in the sense that it runs for X days with full features (the trial period) and, after X days, if a client code/password (or some kind of license) is not provided by the user, it stops working or continues with reduced features. Where can I find instructions on how to do this? If possible, can such an app be published on Splunkbase? Best regards, Altin
While trying to upload my CSV file as a lookup, I encounter this error: "Encountered the following error while trying to save: File has no line endings". I tried removing extra spaces and special characters from the header, but I still face this issue. I also tried saving in different CSV formats, like UTF-8 and CSV (MS-DOS), but no luck.
Hi, We set up Security Command Center to send alerts to Splunk for detecting mining activity. However, I've observed that we're not receiving SCC logs in Splunk at the moment. What steps can we take to resolve this issue? Thanks
Hi, I want to migrate the Splunk instance from a Mac to a Windows Server 2019 machine, and I want to make sure the license is moved to the new machine. Is there a step-by-step process to perform this activity? Thanks.
Hello there, we use search filters in our role management concept. It works fine, but we got stuck on the following problem: since some of our hosts have a physical hostname (srv1, srv2, srv3, ...) and a virtual hostname (server1-db, server2-db, server3-db, server1-web, server2-web, server3-app), we had to use a lookup table (on the search heads) to map the virtual names to the physical hostnames (which are the names identified by the Splunk forwarder). Our lookup table looks like this:

sys_name,srv_name
srv1,server-db1
srv2,server-db2
srv3,server-web1
srv4,server-web2
srv5,server-app1
srv6,server-app2

My role settings look like this:

[role_metrics_db]
srchFilter = index=metrics AND (host=server-db* OR srv_name=server-db*)
[role_metrics_web]
srchFilter = index=metrics AND (host=server-web* OR srv_name=server-web*)
[role_metrics_app]
srchFilter = index=metrics AND (host=server-app* OR srv_name=server-app*)

Unfortunately, my search filters do not recognize either the field "sys_name" or "srv_name". Should the search filters be done differently? Has anyone had the same challenge? Any help will be appreciated. Cheers!
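One thing to verify: a role's srchFilter is applied as ordinary search terms, so a field like srv_name can only match if it actually exists on the events at search time, i.e. if the lookup runs automatically for the relevant data. A sketch of an automatic-lookup configuration on the search heads (the sourcetype, lookup name, and CSV filename here are placeholders, not taken from the post):

```
# transforms.conf
[host_to_srv]
filename = host_to_srv.csv

# props.conf
[your_metrics_sourcetype]
LOOKUP-srv = host_to_srv sys_name AS host OUTPUT srv_name
```

Even then, filtering on lookup-derived fields in srchFilter can behave differently from filtering on indexed fields like host, so it is worth testing whether the restricted role actually sees the intended events before relying on it for access control.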
I'm creating a dashboard to easily search through our web proxy logs and table the results when troubleshooting. The issue is that some logs contain a destination IP and some don't. One of the dashboard inputs I want to offer is the destination IP (field: dest_ip), but because the field doesn't always exist, a search like the following (tabling excluded):

index=proxy c_ip=$cip$ cs_host=$cshost$ action=$action$ dest_ip=$destip$

with dashboard values c_ip=1.2.3.4, cs_host=* (default), action=* (default), dest_ip=* (default), will exclude the logs that don't have the field "dest_ip". The other three fields exist in all logs. I want to allow input for dest_ip even though it doesn't always exist; that's the issue I'm trying to overcome.
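One common workaround, sketched here: give events without a dest_ip a sentinel value with fillnull, so the default wildcard still matches them while a specific IP filters as expected:

```
index=proxy c_ip=$cip$ cs_host=$cshost$ action=$action$
| fillnull value="none" dest_ip
| search dest_ip=$destip$
```

With the default token value *, the sentinel "none" matches and no events are dropped; when a user enters a concrete IP, only events that actually carry that dest_ip survive. The trade-off is that the dest_ip filter now runs after the initial fetch instead of in the base search.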
Hi, I have ServiceNow data for change requests in Splunk. I want to create a dashboard that gives the average duration of a change request (from actual start date to actual end date) per type of change; the type of change can be derived from the short_description field. On the y-axis: average duration; on the x-axis: type of change request (short_description). I have written this query, but it is not giving the average duration of a change. The result I am getting is far too high; maybe it's calculating across all the events for the same change number, I'm not sure.

index=servicenow short_description IN ("abc", "xyz", "123") | eval start_date_epoch = strptime(dv_opened_at, "%Y-%m-%d %H:%M:%S"), end_date_epoch = strptime(dv_closed_at, "%Y-%m-%d %H:%M:%S") | eval duration_hours = (end_date_epoch - start_date_epoch ) /3600 | eval avg_duration = round (avg_duration_hours, 0) | stats avg(duration_hours) as avg_duration by change_number, short_description | eventstats avg(avg_duration) as overall_avg_duration by short_description | eval ocb = round (overall_avg_duration ,0) | table short_description, ocb
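If the inflation really comes from the same change number appearing in many events, one sketch (field names are taken from the query above and may not match the actual data; this also assumes each event for a change carries both dv_opened_at and dv_closed_at) is to collapse to one row per change before averaging:

```
index=servicenow short_description IN ("abc", "xyz", "123")
| dedup change_number
| eval start_epoch = strptime(dv_opened_at, "%Y-%m-%d %H:%M:%S"),
       end_epoch   = strptime(dv_closed_at, "%Y-%m-%d %H:%M:%S")
| eval duration_hours = (end_epoch - start_epoch) / 3600
| stats avg(duration_hours) AS avg_duration BY short_description
| eval avg_duration = round(avg_duration, 0)
```

Note also that in the original query, round(avg_duration_hours, 0) references a field that does not exist yet at that point in the pipeline, so that eval is a no-op and can be dropped.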
Hi, I have the scenario below; could you please help?

spl1: index=abc sourcetype=1.1 source=1.2 "downstream" "executioneid=*"
spl2: index=abc sourcetype=2.1 source=2.2 "do not write to downstream" "executioneid=*"

Both SPLs use the same index and have a common field called executionid. Some execution IDs are designed not to go to the downstream application in the flow. I want to combine these two SPLs based on the executionid.
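One sketch for correlating the two searches on the shared ID (the post mixes the spellings "executioneid" in the raw-text terms and "executionid" as the field; I've assumed executionid is the extracted field name): run both legs in a single search, tag each event by which leg it came from, then group by the ID:

```
index=abc ((sourcetype=1.1 source=1.2 "downstream")
        OR (sourcetype=2.1 source=2.2 "do not write to downstream")) "executioneid=*"
| eval leg = if(searchmatch("do not write to downstream"), "skipped_downstream", "downstream")
| stats values(leg) AS legs, count BY executionid
```

Each resulting row then shows, per execution ID, which leg(s) it appeared in, so IDs designed to skip the downstream application can be filtered with something like `| where legs="skipped_downstream"`.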
How do I remediate this vulnerability? Tenable 164078  Upgrade Splunk Enterprise or Universal Forwarder to version 9.0 or later.
Hello. I have a problem with a Splunk Dashboard Studio table. Sometimes after refreshing the table, when the content reloads, the column widths become random: some are too wide, some too narrow, even though there is a lot of blank space. The content then doesn't fit the table and a scroll bar appears (an example of what it looks like can be seen below). It does not happen all the time, only occasionally, and I was not able to determine what it depends on. After sorting the table by one of the columns, everything goes back to normal: the column widths become even and the content no longer overflows (an example of what the table should look like can be seen below). Note that I have set a static width for the first column; I tried removing it, but that does not seem to help much, and the column widths still get messed up. Does anyone have suggestions on what could be causing this? I would like to avoid setting static widths for all columns if possible, because in some situations the total number of columns differs. I am using Splunk Enterprise v9.1.1.
I have a sample log file from Apache. How can I verify with Splunk that this log really is an Apache log? Are there tools or any method for that?
Hi, yesterday I upgraded a Splunk instance from 8.2.6 to 9.1.2. Afterwards, all users that have the role "user" are generating this log every 10 milliseconds:

01-04-2024 08:53:44.220 +0000 INFO AuditLogger - Audit:[timestamp=01-04-2024 08:53:44.220, user=test_user, action=admin_all_objects, info=denied ]

This is filling the _audit index very fast. I reduced the index size as a workaround, but that doesn't resolve the problem. Have you ever had this problem in your environment?