
Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

All Topics

How do I search for specific events in Splunk?
How to optimize search performance?
How can I configure data inputs in Splunk?
What are the system requirements for installing Splunk?
How to manage users and roles?
How to manage Splunk licenses?
How do I install Splunk on Windows?
Hi, I have two searches.

The first runs once per day (earliest=-24h@h, latest=now, cron: 5 4 * * *) and writes its results to a summary index:

    my base search ... ... | collect index=summary source="base generator"

The second also runs once per day (earliest=-24h@h, latest=now, cron: 5 6 * * *):

    my base search | join type=left field1 field2 [ search index=summary source="*base generator*" ..... ]

Now I have the results as expected, something like this:

    Field1    field2           _time
    UserA     list of names    30/5/2023 9:30
    UserA     list of names    30/5/2023 9:40

(field1 and field2 are the same in both rows; only the time values differ.)

In this case, I need a consolidated event count. For the scenario above, my count of events should be 2, based on field1 and field2 irrespective of the _time field. I tried, but no luck. Any help would be much appreciated, thanks in advance!
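What I think I need is a final grouping that simply ignores _time; a minimal sketch of what I mean (untested), appended to the second search above:

    ... my second search as above ...
    | stats count by field1 field2

For the two example rows this should return a single row with count=2.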
So I created a field like so:

    | eval message_id=AREA.SUBID
    | stats count as "Number of message_id" by message_id
    | sort 10 - "Number of message_id"

This gives me a column chart with message_id on the X axis and the count on the Y axis. With the drilldown setting shown in the pictures below, I'm trying to make the following query show details only for the specific message_id I click on in the column chart:

    | eval message_id=AREA.SUBID
    | rename TEXT as Text, ICON as Priority, USER as User
    | stats count by User, Text, Priority
    | where message_id="$mess_id2$"
    | sort - count

Yet I get the "no results found" message. I know this is related to the fact that the field is "artificial", but I can't find a way to fix it.
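My current suspicion is that stats drops message_id (it is neither aggregated nor in the by clause), so the where afterwards can never match anything. The variant I'm experimenting with moves the filter before the stats (untested):

    | eval message_id=AREA.SUBID
    | where message_id="$mess_id2$"
    | rename TEXT as Text, ICON as Priority, USER as User
    | stats count by User, Text, Priority
    | sort - count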
Hi Folks, can we ingest logs from Azure Log Analytics into Splunk through Event Hubs? Thanks
We have a Splunk architecture with about 7 indexers, 3 search heads, 2 heavy forwarders, and a deployment server. We want to stop further data ingestion permanently but keep the servers up for searching historical logs. Can you please advise two or more methods to do so? Thanks in advance.
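One option I'm already considering is pushing an app from the deployment server that disables every input on the forwarders; would a sketch like this (app name is just a placeholder) be one valid method?

    # deployment-apps/disable_all_inputs/local/inputs.conf
    [default]
    disabled = true

Ideally I'd also like at least one approach that works on the indexer side rather than the forwarders.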
Hi All, I am trying to display the data on a region-wise map based on the stats count, saving it as a choropleth map, but it is not working.
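From the docs, my understanding is that a choropleth needs the search to end with a geom call against a lookup of shapes; this is the shape of search I believe is required (geo_countries is the built-in lookup; the index and Region field are placeholders for mine):

    index=my_index
    | stats count by Region
    | geom geo_countries featureIdField=Region

Does the search itself look right, or is the problem more likely in the panel configuration?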
I am rather new to Splunk, having come from Event Sentry on a small offline network of VM-based systems on a VDI. Simply put, we moved to Splunk in order to incorporate logging of the Linux systems soon to come as well.

So far I have opted for my company to get a single 1 GB/day license, since the current Event Sentry configuration I use to capture event logs from the Windows systems generates about half a gig a day. So I figured Splunk's data collection would be pretty similar if I collect the same things. But when I actually stood the server up and tried to add my first few servers as data inputs, I found that these few servers, with only the three event logs I care about (System, Security, Application), plus the Splunk server itself, have basically tapped out my 1 GB/day limit.

Am I missing some crucial configuration component here, or did I insanely underestimate the collection volume? Realistically I should have tried this out prior to going the licensed route, but I thought the collection would be akin to what I have seen before. Any details or assistance in finding resources about this would be great. As it stands, I have been searching for details on what all is captured but am not coming up with much.
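Follow-up question: is a search like this against the internal license usage log the sensible way to see what is actually eating the quota? (My understanding is that b is bytes and st is the sourcetype in that log.)

    index=_internal source=*license_usage.log* type=Usage
    | stats sum(b) as bytes by st
    | eval GB=round(bytes/1024/1024/1024, 3)
    | sort - GB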
The driver could not establish a secure connection to SQL Server by using Secure Sockets Layer (SSL) encryption. Error: "The server selected protocol version TLS10 is not accepted by client preferences [TLS13, TLS12]". How can I make this work?
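From what I've read so far, one workaround (assuming the SQL Server side really is limited to TLS 1.0; upgrading it would obviously be the better fix) is to re-enable TLSv1 for the JVM the JDBC driver runs under, by removing TLSv1 from the disabled list in that JVM's java.security file:

    # $JAVA_HOME/conf/security/java.security
    # before (abbreviated):
    jdk.tls.disabledAlgorithms=SSLv3, TLSv1, TLSv1.1, ...
    # after, with TLSv1 removed:
    jdk.tls.disabledAlgorithms=SSLv3, TLSv1.1, ...

Is that the right lever here, or is there a per-connection JDBC setting I should use instead?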
Hello Everyone,

This is an extension of a previous query I posted: https://community.splunk.com/t5/Splunk-Search/How-would-I-write-a-Splunk-search-to-build-a-table-for-PASS-and/m-p/644830#M223292

Thanks to @ITWhisperer, the updated query worked well. Now I am trying to add percentage columns for PASS and FAIL instead of the counts:

    _time               success    fail
    2023-05-28 03:00    98         2
    2023-05-28 04:00    60         40
    2023-05-28 05:00    100        0

I was trying to build a query, something like this:

    index=my_index sourcetype=openshift_logs openshift_namespace=my_ns openshift_cluster="cluster009" ("message.statusCode"=2* OR "message.statusCode"=4*)
    | eval status=if('message.statusCode'>300,"fail","success")
    | search "message.logType"=CLIENT_RES
    | search "message.url"="/shopping/carts/*"
    ```| timechart span=1h dc("message.tracers.ek-correlation-id{}") as count by status```
    | eventstats count("message.tracers.ek-correlation-id{}") as totalCount
    | eventstats count("message.tracers.ek-correlation-id{}") as individualCount by status
    | eval percent=(individualCount/totalCount)*100

I know the above query is incomplete, and I'm not sure if this is the right way to proceed.
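Conceptually, what I think I'm after is keeping the timechart and turning its columns into percentages, something like this (untested; it assumes the timechart by status yields columns literally named success and fail):

    index=my_index sourcetype=openshift_logs openshift_namespace=my_ns openshift_cluster="cluster009" ("message.statusCode"=2* OR "message.statusCode"=4*) "message.logType"=CLIENT_RES "message.url"="/shopping/carts/*"
    | eval status=if('message.statusCode'>300,"fail","success")
    | timechart span=1h dc("message.tracers.ek-correlation-id{}") by status
    | addtotals fieldname=total
    | eval success=round(success/total*100, 0), fail=round(fail/total*100, 0)
    | fields _time success fail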
Hi, I created a custom app for my company and I would like to create a menu with shortcuts to the most valuable dashboards of other installed apps. In this case, I was trying to link a dashboard from the Azure app: https://splunk.xxxx/en-US/app/microsoft_azure/aad_signin

I tried adding an entry in the file C:\Program Files\Splunk\etc\apps\MYAPP\local\data\ui\nav\default.xml:

    <nav search_view="search">
      <view name="search" default='true' />
      <view name="disk_monitoring" />
      <view name="analytics_workspace" />
      <view name="datasets" />
      <view name="reports" />
      <view name="alerts" />
      <view name="dashboards" />
      <view name="azure_sign-in" />
    </nav>

and I also created a view file (azure_sign-in.xml) in the path C:\Program Files\Splunk\etc\apps\MYAPP\local\data\ui\views:

    <shortcut>
      <label>Azure Sign-ins</label>
      <url>/en-US/app/microsoft_azure/aad_signin</url>
    </shortcut>

After restarting Splunk, the menu's new option appears, but nothing is shown when I click on it. I looked for the view and changed its permissions to All Apps, but still nothing is shown. What am I missing? Is this the right way to do it?
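The alternative I've seen mentioned is skipping the shortcut view entirely and putting a plain anchor element straight into the nav's default.xml (I haven't confirmed this is the intended mechanism):

    <nav search_view="search">
      <view name="search" default='true' />
      ...
      <a href="/en-US/app/microsoft_azure/aad_signin">Azure Sign-ins</a>
    </nav>

Is that the supported way to link across apps?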
index="*"  tag=fw action=blocked | stats values(dest) as dest by src | eval dest = dest | where dest > 10
Hello,

I've tried parsing my RADIUS log files using this tutorial: https://fraserclark926577729.wordpress.com/2019/12/18/monitoring-windows-nps-logs-with-splunk/

So I created my app in "/opt/splunk/etc/deployment-apps/nps_monitor" and, in "/opt/splunk/etc/deployment-apps/nps_monitor/local", my 3 files:

app.conf:

    #
    # Splunk app configuration file
    #
    [install]
    is_configured = 0
    [ui]
    is_visible = 1
    label = nps_monitor
    [launcher]
    author = NW
    description =
    version = 1.0.0

props.conf:

    [ias]
    SHOULD_LINEMERGE = false
    KV_MODE = NONE
    INDEXED_EXTRACTIONS = CSV
    # The type of file that Splunk software should expect for a given sourcetype, and the extraction and/or parsing method that should be used on the file.
    # This setting tells Splunk to specify the header field names directly
    FIELD_NAMES = ComputerName,ServiceName,Record_Date,Record_Time,Packet_Type,User_Name,Fully_Qualified_Distinguished_Name,Called_Station_ID,Calling_Station_ID,Callback_Number,Framed_IP_Address,NAS_Identifier,NAS_IP_Address,NAS_Port,Client_Vendor,Client_IP_Address,Client_Friendly_Name,Event_Timestamp,Port_Limit,NAS_Port_Type,Connect_Info,Framed_Protocol,Service_Type,Authentication_Type,Policy_Name,Reason_Code,Class,Session_Timeout,Idle_Timeout,Termination_Action,EAP_Friendly_Name,Acct_Status_Type,Acct_Delay_Time,Acct_Input_Octets,Acct_Output_Octets,Acct_Session_Id,Acct_Authentic,Acct_Session_Time,Acct_Input_Packets,Acct_Output_Packets,Acct_Terminate_Cause,Acct_Multi_Ssn_ID,Acct_Link_Count,Acct_Interim_Interval,Tunnel_Type,Tunnel_Medium_Type,Tunnel_Client_Endpt,Tunnel_Server_Endpt,Acct_Tunnel_Conn,Tunnel_Pvt_Group_ID,Tunnel_Assignment_ID,Tunnel_Preference,MS_Acct_Auth_Type,MS_Acct_EAP_Type,MS_RAS_Version,MS_RAS_Vendor,MS_CHAP_Error,MS_CHAP_Domain,MS_MPPE_Encryption_Types,MS_MPPE_Encryption_Policy,Proxy_Policy_Name,Provider_Type,Provider_Name,Remote_Server_Address,MS_RAS_Client_Name,MS_RAS_Client_Version
    TIME_FORMAT = %m/%d/%Y%n%H:%M:%S
    MAX_TIMESTAMP_LOOKAHEAD = 20
    TIMESTAMP_FIELDS = Record_Date,Record_Time
    DATETIME_CONFIG =
    NO_BINARY_CHECK = true
    disabled = false
    pulldown_type = true

inputs.conf:

    [monitor://C:\NPS-Log\IN*.log]
    sourcetype = ias
    index = radius
    disabled = 0

I've deployed my app to my server, and I can see the app in the server's folder too, but no data is coming into my "radius" index. Did I miss something? Thanks for your help.
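One thing I'm now wondering: does the "radius" index itself need to be created on the indexer before anything shows up? I.e., something like this in indexes.conf on the indexer (a sketch with default-style paths):

    [radius]
    homePath   = $SPLUNK_DB/radius/db
    coldPath   = $SPLUNK_DB/radius/colddb
    thawedPath = $SPLUNK_DB/radius/thaweddb

Also, since the sourcetype relies on INDEXED_EXTRACTIONS, my understanding is that this props.conf has to be in the app deployed to the forwarder doing the monitoring, not just on the indexer. Can anyone confirm?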
Hey, I can see the newly added field extraction regex on my field extractions page, but the same field is not available on the Search page.
This happens sometimes on this DB, and it is the only one where I experience this. The view, fields, and values are pretty simple, and so is my input. But sometimes it doesn't index certain fields, which makes it difficult to build statistical reports. I checked the view, and it certainly has those fields and values, but the events are lacking them. Another weird thing is that some events are duplicated and some aren't. The events that are not duplicated are the ones lacking fields. I have never seen this before and don't know how to resolve it.