All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Hello everyone. Background: I want to create a new Splunk environment. I'm still in the learning process; I'm new to Splunk. My environment today includes:
- 3 indexers
- 3 search heads
- 1 cluster master that also serves as a license master
- 1 universal forwarder
(All servers are Linux servers.)

I want to build my Splunk environment in cluster configuration mode: I want to send data from the universal forwarder to the cluster master, and from there to my indexers. My main goal is to collect logs from different application servers (sent to me via syslog or HTTP) in order to monitor their status. I want to create a unique index for each app; for example, logs sent from an app called app1 will go into an index called "index_app1".

I would like help with the following questions:
1. How can I check whether the cluster master knows about the universal forwarder? How do I verify it?
2. How do I configure the inputs.conf file of my universal forwarder so that each app sends logs to the UF on a different port (TCP or UDP)? For example, application A will send logs to my universal forwarder on port 4928, and application B on port 4929.
3. How can I forward the messages onward and have each message land in the correct index? All messages sent from application A should go to index_app1, and all messages sent from application B should go to index_app2.

Thank you for your help!
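For questions 2 and 3 above, a minimal inputs.conf sketch on the universal forwarder might look like the following. The port numbers and index names come from the question; the sourcetype values are placeholders, and whether to use tcp:// or udp:// depends on how each app actually sends:

```
# inputs.conf on the universal forwarder
# Application A -> TCP port 4928 -> index_app1
[tcp://4928]
index = index_app1
sourcetype = app1_syslog

# Application B -> TCP port 4929 -> index_app2
[tcp://4929]
index = index_app2
sourcetype = app2_syslog
```

Note that in a typical indexer cluster the forwarder sends data directly to the indexer peers (configured in outputs.conf), not through the cluster master; the cluster master only coordinates the peers and does not relay event data.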
Hi, I'm trying to test out Splunk in Docker on my Debian server, but I'm unable to get it to start after several attempts.

Creating splunk ... error
ERROR: for splunk Cannot start service splunk: OCI runtime create failed: container_linux.go:349: starting container process caused "exec: \"/sbin/entrypoint.sh\": stat /sbin/entrypoint.sh: permission denied": unknown
ERROR: Encountered errors while bringing up the project.

docker-compose file:

version: '3.3'
services:
  splunk:
    ports:
      - '8500:8000'
    environment:
      - SPLUNK_START_ARGS=--accept-license
      - SPLUNK_PASSWORD='removed'
    volumes:
      - /opt/docker/splunk/etc:/opt/splunk/etc
      - /opt/docker/splunk/var:/opt/splunk/var
    container_name: splunk
    image: 'splunk/splunk:8.0'
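One thing worth trying as a first diagnostic step (an assumption, not a confirmed fix): start from a minimal compose file without the host volume mounts, to rule out pre-existing host directories with the wrong ownership or permissions masking the container's own files. Also note that in list-style `environment:` entries the quotes become part of the value, so `'removed'` would be set literally with quotes:

```yaml
version: '3.3'
services:
  splunk:
    ports:
      - '8500:8000'
    environment:
      - SPLUNK_START_ARGS=--accept-license
      # no quotes here; quotes in list-form env entries are literal
      - SPLUNK_PASSWORD=removed
    container_name: splunk
    image: 'splunk/splunk:8.0'
```

If this minimal version starts, reintroduce the volume mounts one at a time to find the one that breaks startup.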
Hi everyone, can anyone help me out with this? I have a field named Request_URL, e.g. Request_URL = https://xyz/api/groups/230df08c/registry. I want to extract the "230df08c" portion from every Request_URL. Can someone guide me on the regular expression for it in Splunk? Thanks in advance.
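Assuming the ID is always the path segment immediately after /groups/, a rex sketch could look like this (the capture-group name group_id is my own choice):

```
... | rex field=Request_URL "/groups/(?<group_id>[^/]+)"
```

This captures everything after /groups/ up to the next slash into a new field called group_id.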
Team, I came to know that the federated friendship feature, through which controller cross-app correlation works, is going to be decommissioned in future releases. If that is the case, I think this is a very good feature and should not be decommissioned. I'm not sure on what basis this was decided. Please loop in the product manager and let them hear our voice against decommissioning this. Referring to: https://www.appdynamics.com/blog/product/cross-controller-federation-ensuring-application-visibility-and-correlation/ I'd appreciate any help and response. P.S.: I just wanted to make sure the product team knows the importance of the functionality users rely on. For example, I'm not sure why the "Compare Releases" feature was decommissioned without any user feedback; it was really handy and helpful. Regards, VVS
Hi, I registered for a free trial of Splunk Cloud, but when I try to access the cloud instance it shows "login failed" even though I am entering the correct login credentials. Can anyone help with this?
How can I get a list of apps and their average daily use?
How can I enable the export option at the panel level on a dashboard?
I have some scheduled reports which are sent by email. Is there any additional formatting option when "inline" is selected? I want to create or format the report like a dashboard view, but it should be in the email body. Right now it is rendered as a text-based table view.
Hello Team, I am getting the error "Invalid account" when trying to access the ES Sandbox instance URL. Thanks, Lalit
Hey, I have 3 indexers and 3 search heads. I also have a cluster master server. I'm trying to connect my universal forwarder in order to send logs from remote servers to the indexers (through the cluster master). How can I configure the connection between the UF and the cluster master? Thanks for helping!
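A universal forwarder normally sends event data straight to the indexer peers rather than through the cluster master; what the master can do is hand the forwarder the current peer list via indexer discovery. A hedged outputs.conf sketch for the UF, assuming indexer discovery is enabled on the master (the master_uri hostname, pass4SymmKey value, and group names are placeholders):

```
# outputs.conf on the universal forwarder
[indexer_discovery:cluster1]
master_uri = https://cluster-master.example.com:8089
pass4SymmKey = changeme

[tcpout:cluster1_peers]
indexerDiscovery = cluster1

[tcpout]
defaultGroup = cluster1_peers
```

The matching [indexer_discovery] stanza with the same pass4SymmKey must also exist in server.conf on the cluster master for the forwarder to be accepted.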
Hi Team, can you help? I want only selected URLs to display in my query output.

index=dev_env sourcetype="urldata" URL="*" LoadTime="*"
| timechart span=1m eval(round(avg(LoadTime),0)) as TimeUsedtoload by URL
| fields + _time "https://www.pingtest.com/Logins/Login.aspx?testid=1578&actid=21047" "https://www.pingtest.com/*/testing.aspx" "https://www.othertest.com/Logins/*.aspx"

The output includes all the URLs, like:
_time
https://www.servermonitor/server.aspx?filetype_id=474&mode=new
https://www.pingtest.com/Testdata.aspx
https://www.pingtest.com/Logins/Login.aspx?testid=1578&actid=21047
and other multiple URLs

I want to display only URLs like:
"https://www.pingtest.com/Logins/Login.aspx?testid=1578&actid=21047"
"https://www.pingtest.com/Logins/Login.aspx"
"https://www.othertest.com/Logins/Login.aspx?testid=1578&"
and, from the above, only those having non-null values.
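One approach (a sketch, with wildcard patterns guessed from the example URLs) is to filter the events before the timechart, so only the wanted URLs ever become columns, instead of trying to trim columns afterward with fields:

```
index=dev_env sourcetype="urldata" LoadTime="*"
    (URL="https://www.pingtest.com/Logins/*.aspx*" OR URL="https://www.othertest.com/Logins/*.aspx*")
| timechart span=1m eval(round(avg(LoadTime),0)) as TimeUsedtoload by URL
```

Filtering in the base search also avoids the problem of wildcards inside a fields command, which matches field names literally rather than as URL patterns.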
Hi all. I am very new to Splunk. I am using Splunk Enterprise on my main OS (Windows). I am trying to set up a forwarder to send data from Kali and Ubuntu VMs. I am trying to download the forwarder on my Windows OS, but when I click on "Download now" it just comes up with a blank screen rather than starting the download. Any ideas what the issue could be?
Hi, I'm new to Splunk. I want to combine rows like this but don't know how:

COL1 | COL2 | VALUE
c1   | c2   | Amy
c2   | c1   | Bob
c3   | c4   | Carol
c4   | c3   | David

Expected answer:

NEWC3   | VALUE
c1 / c2 | Amy Bob
c3 / c4 | Carol David

Thanks
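One way to do this (a sketch; it assumes a symmetric pair key can be built by putting the two column values in a fixed order, and the field names pair/NEWC3 follow the example above):

```
... | eval pair=if(COL1 < COL2, COL1." / ".COL2, COL2." / ".COL1)
| stats list(VALUE) as VALUE by pair
| rename pair as NEWC3
```

Because the key is order-independent, the rows (c1, c2) and (c2, c1) both get the key "c1 / c2" and their VALUE entries are grouped together by stats.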
Can anyone help with why we are seeing these WARN messages in the logs and how to fix them permanently? We are performing a manual resync whenever the count of events is > 5 in a 15-minute time range, using the query below:

index=_internal host=searchhead* component=ConfReplicationThread log_level=WARN "Cannot accept push"
| bin span=15m _time
| stats max(consecutiveErrors) as count by host, _time
| where count > 5

LOGS:
==========
WARN ConfReplicationThread - Error pushing configurations to captain=https://searchhead01.domain.com:8089, consecutiveErrors=1 msg="Error in acceptPush: Non-200 status_code=400: ConfReplicationException: Cannot accept push with outdated_baseline_op_id=8d89fca5ef4520b00b8ffe8b1366a178b92b52fb; current_baseline_op_id=a948f0e3f0fcae707ce37ca7d7a73"
WARN ConfReplicationThread - Error pushing configurations to captain=https://searchhead01.domain.com:8089, consecutiveErrors=1 msg="Error in acceptPush: Non-200 status_code=400: ConfReplicationException: Cannot accept push with outdated_baseline_op_id=66098bdc22c2bcacf951fb104558db365ac64820; current_baseline_op_id=085e675e4c9d8c9fafabee"
WARN ConfReplicationThread - Error pulling configurations from captain=https://searchhead01.domain.com:8089, consecutiveErrors=1 msg="Error in fetchFrom, at=a6a747e7138353bd07873f04fe90f2c9b4564567: Network-layer error: Connect Timeout" (repeated 3 times)
==========

We even tried reducing the max push count to 50 (the default is 100). How can we resolve this permanently?

==========
WARN ConfMetrics - single_action=PUSH_TO took wallclock_ms=1525! Consider a lower value of conf_replication_max_push_count in server.conf on all members.
WARN ConfMetrics - single_action=PUSH_TO took wallclock_ms=2644! Consider a lower value of conf_replication_max_push_count in server.conf on all members.
WARN ConfMetrics - single_action=PULL_FROM took wallclock_ms=2011! Consider a lower value of conf_replication_max_pull_count in server.conf on all members.
WARN ConfMetrics - single_action=PULL_FROM took wallclock_ms=1778! Consider a lower value of conf_replication_max_pull_count in server.conf on all members.
==========

Below are the settings in server.conf:

conf_replication_max_push_count = 50
conf_replication_purge.period = 3h
conf_replication_period = 10

We do not want to run a resync every time:

splunk resync shcluster-replicated-config
Hi, I am currently monitoring ///asbc/logs/*.log, and this folder gets updated every day with a file called myfile_ddmmyyyy.log. But since there has been no change or update in the log file for many days, it doesn't get indexed. I am using crcSalt as well, but no luck; it's not getting indexed. Please help.

disabled = false
index = abc
sourcetype = _json
crcSalt = <SOURCE>
How do I use a regex query to separate server names which have different names, e.g.:
WSINI601XASI01
WRDNA502XUSA05
WGBR601XGBR11
from the subject lines below?
1. INC000027679570 | WSINI601XASI01 | scom exchange 2k16: Failed to connect to computer
2. Wo# 1197736 / INC00027697776 / please perform hardware diagnostic on WRDNA502XUSA05
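A rex sketch based only on the three sample names above (each appears to be W, then 3 or 4 letters, 3 digits, X, 3 letters, 2 digits; the pattern and the field name servername would need adjusting if other name shapes exist):

```
... | rex "(?<servername>\bW[A-Z]{3,4}\d{3}X[A-Z]{3}\d{2}\b)"
```

This matches WSINI601XASI01, WRDNA502XUSA05, and WGBR601XGBR11 wherever they appear in the subject line and puts the match into a new field called servername.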
Friends, I need help indexing this data in Splunk. The date and time in the file are 3 hours ahead (GMT) of my current time, so a log entry that shows 10:00 is in reality 07:00. Here is an example of a file: note that the beginning of each log line indicates the day (16) and the time, but in fact we are on the 15th (3 hours less). Can you help me? I tried to use a standard indexing sourcetype but it didn't work.

16 01:04:59.152) From: <sip:10.56.106.39:5060;transport=tcp>;tag=101cad68-276a380a-13c4-61010-10b8b-55341977-10b8b
(16 01:04:59.152) To: <sip:10.55.115.12:5060;transport=tcp>;tag=15616914053140452_local.1595388622658_5585445_5703986
(16 01:04:59.152) Via: SIP/2.0/TCP 10.56.106.39:5060;rport=62175;branch=z9hG4bK-10b8b-4151957-51c18034-fa23450
(16 01:04:59.152) Record-Route: <sip:SM-CTSP-01@10.55.115.12;av-asset-uid=rw-2a64a6b3;lr;transport=TCP>
(16 01:04:59.152) Av-Global-Session-ID: 83e241d0-df5c-11ea-b2f4-005056866260
(16 01:04:59.152) Server: AVAYA-SM-7.1.3.4.713406
(16 01:04:59.152) Contact: <sip:10.55.115.12:5060;transport=tcp>
(16 01:04:59.152) Content-Length: 0
(16 01:04:59.152)
(16 01:04:59.152)
(16 01:04:59.152) <<<<< received: SIP/2.0 200 OK
(16 01:04:59.152) Call-ID: 101caf18-276a380a-13c4-61010-10b8b-40841bfc-10b8b
(16 01:04:59.152) CSeq: 460658901 OPTIONS
(16 01:04:59.152) From: <sip:10.56.106.39:5060;transport=tcp>;tag=101caf18-276a380a-13c4-61010-10b8b-490db803-10b8b
(16 01:04:59.152) To: <sip:10.55.115.14:5060;transport=tcp>;tag=9708321523057227_local.1595385862537_5573806_5690910
(16 01:04:59.152) Via: SIP/2.0/TCP 10.56.106.39:5060;rport=62176;branch=z9hG4bK-10b8b-4151957-2000a9d0-fa235d0
(16 01:04:59.152) Record-Route: <sip:SM-CTSP-02@10.55.115.14;av-asset-uid=rw-338e8404;lr;transport=TCP>
(16 01:04:59.152) Av-Global-Session-ID: 83e2b700-df5c-11ea-a9a2-00505686cdac
(16 01:04:59.152) Server: AVAYA-SM-7.1.3.4.713406
(16 01:04:59.152) Contact: <sip:10.55.115.14:5060;transport=tcp>
(16 01:04:59.152) Content-Length: 0
(16 01:04:59.152)
(16 01:04:59.152)
(16 01:05:12.319) <<<<< received: OPTIONS sip:10.56.106.39 SIP/2.0
(16 01:05:12.319) Via: SIP/2.0/UDP 10.225.86.50:5060;branch=z9hG4bK24d305d2
(16 01:05:12.319) Max-Forwards: 70
(16 01:05:12.319) From: "asterisk" <sip:asterisk@10.225.86.50>;tag=as232a75bf
(16 01:05:12.319) To: <sip:10.56.106.39>
(16 01:05:12.319) Contact: <sip:asterisk@10.225.86.50:5060>
(16 01:05:12.319) Call-ID: 0e70a5de2b61b2c801b75e171774e189@10.225.86.50:5060
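A hedged props.conf sketch for timestamps like "(16 01:04:59.152)": the sourcetype name avaya_sip is a placeholder, TIME_FORMAT assumes the pattern is "day hour:minute:second.millis", and TZ = GMT assumes the source writes GMT so that Splunk converts events to local time at index time:

```
# props.conf on the indexer or heavy forwarder that parses this data
[avaya_sip]
TIME_PREFIX = \(
TIME_FORMAT = %d %H:%M:%S.%3N
TZ = GMT
MAX_TIMESTAMP_LOOKAHEAD = 20
```

With TZ set on the sourcetype, Splunk interprets the extracted timestamp as GMT and displays it shifted into each user's configured timezone, which should remove the 3-hour offset.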
Is it possible to set the time range picker (the one to the right of the search bar) as part of the query I enter in the search bar?
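As far as I know, time modifiers typed into the search string override the time range picker, so a search can carry its own range, for example:

```
index=main earliest=-24h@h latest=now
```

This searches the last 24 hours (snapped to the hour) regardless of what the picker shows; relative modifiers like -7d@d or absolute ones via earliest="MM/DD/YYYY:HH:MM:SS" work the same way.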
Hi all, my Splunk ITSI instance is connected to ServiceNow (SNOW). All the episodes are triggering with count values greater than 10. Issue: for each new episode count on the Splunk side (Splunk number), multiple tickets are getting created, all with the same unique SNOW incident number but with different Splunk numbers. Please help: where is the issue? Is there any way we can suppress this with rules in ITSI?