All Topics

Has anyone forwarded Cisco Finesse logs to Splunk Cloud? If so, it would be great if you could share the steps you followed.
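For whoever picks this up: the usual pattern is a universal forwarder on (or near) the Finesse node monitoring its log directory and sending to Splunk Cloud via the customer's forwarder credentials app. A minimal sketch — the path, sourcetype, and index names below are placeholders, not Finesse specifics:

```
# inputs.conf on the universal forwarder
[monitor://<path_to_finesse_logs>]
sourcetype = cisco:finesse
index = network
disabled = false
```

Outputs to Splunk Cloud normally come from the downloaded Universal Forwarder credentials package rather than a hand-written outputs.conf.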
Hi, I am trying to set up a Hive connection using Splunk DB Connect and am stuck on Kerberos authentication. I have set up db_connection_types.conf and added the Cloudera drivers for the Hive DB. While setting up the connection I am using the JDBC URL below (as per the Cloudera documentation) and getting a Kerberos authentication error:

jdbc:hive2://<host>:10000/<db_name>;principal=hive/<my_kdc_principal_name>;AuthMech=1; KrbRealm=<kdc realm> ;KrbHostFQDN=<kdc host fqdn>; KrbServiceName=hive;KrbAuthType=2;useSSL=1

ERROR: [Cloudera][HiveJDBCDriver](500168) Error creating login context using ticket cache: Unable to obtain Principal Name for authentication.

If I use the JDBC URL below (as per other answers from Splunk Community), I get a different KDC authentication error:

jdbc:hive2://<host>:10000/<db_name>;principal=hive/<principal_name>;useSSL=1

ERROR: [Cloudera][HiveJDBCDriver](500164) Error initialized or created transport for authentication: Invalid status 21.

Kindly help! Thank you.
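In case it helps anyone answering: with the Cloudera Hive JDBC driver, the "(500168) Error creating login context using ticket cache" message typically means the OS user running splunkd has no valid Kerberos ticket in its cache (`KrbAuthType=2` forces ticket-cache authentication). A first check might look like this — the principal name is illustrative, not from the poster's environment:

```
# run as the same OS user that runs splunkd
klist                            # is there a valid, unexpired TGT?
kinit svc_splunk@EXAMPLE.REALM   # obtain one if not
```

A hedged alternative is `KrbAuthType=0` (driver auto-detects the login context) in the JDBC URL, but verifying the ticket cache first narrows down which error is the real one.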
I am attempting to use the map command and table the data. I am trying to map in values, run them through a predict function, and table the results. I get 90% of the way there, except the predicted rows do not populate the op value.

<my search> | table op |map [search op=$op$ <more search stuff> ] ... | timechart count as Vol, values(op) as op | predict Vol

When doing this I get a table with multiple rows per time, one for each op, like the one below:

_time               Vol  op     high(prediction(Vol))  low(prediction(Vol))
12:00               10   test1  12                     7
12:00               11   test2  14                     8
12:15               15   test1  17                     11
12:15               12   test2  15                     10
12:30 (predicted)              16                     10
12:30 (predicted)              12                     9

Any ideas why this is not populating?
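One hedged explanation for anyone answering: `predict` appends future rows, but `values(op)` only has events at historical timestamps, so `op` is empty on the appended rows. If carrying the last seen value forward is acceptable, a sketch would be:

```
<my search>
| timechart count as Vol, values(op) as op
| predict Vol
| filldown op
```

If instead a separate prediction per op is wanted, `timechart count by op` followed by `predict` on each resulting column may be the better shape.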
Hello, I have the free version of the machine agent running on CentOS, configured with the SaaS version of the cloud controller. The agent starts and connects to the controller, and I also see the server listed on the controller. However, I don't see any metrics under the Metric Browser. Can you help me troubleshoot this?

Machine agent version: 20.10.0.2813
CentOS 7.8
Controller: https://[Redacted].saas.appdynamics.com/

Here are some observations that may or may not be related. I will provide config/logs if required.
1) When I reload the controller, I sometimes see "500 Internal Server Error" reported.
2) I see this in machine-agent.log: [ExtensionStarter-DockerMonitoring] 04 Nov 2020 14:39:29,269 ERROR CGroupFileSystemRootProvider - Could not find CGroup files in following path(s) : [/sys/fs/cgroup, /cgroup]
3) I see this in analytics-agent.log: [2020-11-04T14:54:02,464Z] [ERROR] [analytics-agent-sync-thread-0] [c.a.a.agent.sync.ErrorMessageHelper] Analytics agent failed to connect to the controller registration endpoint. This can happen when the connection is refused remotely, or when there is no process listening on the remote address/port. Please check to see if the controller address is configured correctly and is running.
4) I have server monitoring disabled; I wasn't able to get the agent started with it enabled. (<sim-enabled>false</sim-enabled> in controller-info.xml)

Thanks, nram

^ Edited by @Ryan.Paredez to redact Controller URL. Please do not share Controller URLs on Community posts for security and privacy reasons.
I am trying to adjust the dashboard to show the query being generated by a set of dropdowns. I think this will help the team better understand how to build queries in the future. If you have HTML source that points toward this, it would be greatly appreciated.
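Not the poster's XML, but one common Simple XML pattern is to echo the tokens the dropdowns set inside an `<html>` panel, so the effective query is always visible. Token names here are invented for illustration:

```xml
<row>
  <panel>
    <html>
      <p>Generated search: index=$index_tok$ sourcetype=$sourcetype_tok$ host=$host_tok$</p>
    </html>
  </panel>
</row>
```

Since tokens substitute into `<html>` content at render time, the text updates whenever a dropdown changes.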
Hi, We are currently on Splunk Enterprise 7.3.5 and ES 5.3.1. Could someone help identify the next stable versions to upgrade to? Regards,
Hi team, I am extracting JSON data using spath. There is one datetime field which comes through with the zone name appended. I want it without the UTC word, like 2020-11-03 10:10:10. Can we do this using spath? If yes, please provide an example. Thank you for your help. Sanket K
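A note for whoever answers: `spath` itself only extracts the value as-is; stripping a trailing zone name is usually done with an `eval`/`replace` afterwards. A sketch — the path and field names are placeholders:

```
| spath path=event.timestamp output=dt
| eval dt=replace(dt, "\s*UTC$", "")
```

If the field should also become a real epoch time, `strptime(dt, "%Y-%m-%d %H:%M:%S")` on the cleaned value is the usual follow-up.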
I am a beginner. I plan to build a visualization on the dashboard based on firewall log data. Are there any visualizations you would recommend?
Hi, I am a beginner at Splunk and am wondering if there is a test log file somewhere that I can use to get to know Splunk better. I already have a Splunk test system, but no data to test with. Thanks in advance! Br, AR
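One option for anyone answering: Splunk's official Search Tutorial ships a downloadable sample dataset, and you can also generate throwaway events entirely in SPL with `makeresults` — no inputs needed. The field values below are arbitrary examples:

```
| makeresults count=100
| eval _time=_time - (random() % 86400)
| eval status=mvindex(split("200,404,500", ","), random() % 3)
```

This spreads 100 synthetic events over the last day with a random status field, which is enough to practice `stats`, `timechart`, and dashboarding.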
Hello,

I have the following security log entries:

***********************************************************************************
****** SECURITY WARNING ******
***********************************************************************************
Wed Nov 4 04:39:25 2020
Error: Permission denied (-13), Access denied [http_rewrite.c 4012]
CONNECTION (id=2738/2739): used: 1, type: default, role: Server(1), stateful: 0
nihdl: -1, ssl: (nil), protocol: HTTPS(2)
local host: XXX:42217 ()
remote host: XXX:443 () - (-)
system: proxy prot
local host: XXX:443 own
remote host: XXX:35636
[Thr 140203996280576] Address Offset REQUEST:
[Thr 140203996280576] ------------------------------------------------------------------------
[Thr 140203996280576] 7f83d21ec910 000000 47455420 2f666176 69636f6e 2e69636f |GET /favicon.ico|
[Thr 140203996280576] 7f83d21ec920 000016 20485454 502f312e 310d0a68 6f73743a | HTTP/1.1..host:|
....
....
[Thr 140203996280576] 7f83d21ecce0 000976 702d7561 2d70726f 746f636f 6c3a2068 |p-ua-protocol: h|
[Thr 140203996280576] 7f83d21eccf0 000992 74747073 0d0a0d0a |ttps.... |
[Thr 140203996280576] ------------------------------------------------------------------------
***********************************************************************************

Then they repeat in the above format. How and where (which config file) would I set the correct line breaking and event time settings?

Kind Regards,
Kamil
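For whoever answers: line breaking and timestamping are set in props.conf on the first full Splunk instance that parses the data (indexer or heavy forwarder, not a UF). A hedged sketch, assuming the asterisk banner starts each event and the sourcetype name is your choice:

```
# props.conf
[my_security_log]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)(?=\*{20,}[\r\n]+\*+\s+SECURITY WARNING)
TIME_FORMAT = %a %b %e %H:%M:%S %Y
MAX_TIMESTAMP_LOOKAHEAD = 400
```

`LINE_BREAKER` needs the capturing group (the break happens at it), and the lookahead keeps the banner as part of the next event; the generous lookahead lets Splunk find the timestamp past the two banner lines.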
How do I check whether a device has the universal forwarder installed, and if it does, how do I troubleshoot when it's not sending data to the indexer?
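Some generic first checks for anyone answering (Linux default install path assumed; adjust for Windows):

```
# Is the UF installed and running?
/opt/splunkforwarder/bin/splunk status

# Which indexers is it configured to send to, and is the connection active?
/opt/splunkforwarder/bin/splunk list forward-server

# Look for output-queue or connection errors
grep -i "TcpOutputProc\|blocked" /opt/splunkforwarder/var/log/splunk/splunkd.log
```

On the Splunk side, searching `index=_internal host=<uf_hostname>` is a quick way to confirm the forwarder is reaching the indexers at all, since UFs forward their own internal logs by default.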
As per the attached screenshot: when I select any host from the dropdown, I want to hide the first four panels while the others keep working. How can I do this using XML?
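A common Simple XML approach (token names invented for illustration): have the dropdown set or unset a flag token in its `<change>` handler, and give the panels a `rejects` attribute so they disappear when that token exists:

```xml
<input type="dropdown" token="host_tok">
  <change>
    <condition value="*"><unset token="host_chosen"></unset></condition>
    <condition><set token="host_chosen">true</set></condition>
  </change>
</input>
...
<panel rejects="$host_chosen$">
  <!-- hidden as soon as a specific host is selected -->
</panel>
```

Here `value="*"` is assumed to be the "All hosts" choice; `depends` is the mirror-image attribute if you want panels shown only when a host is selected.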
Looking for a search query to monitor the activity of a set of users across all indexes. I tried the one below but couldn't get my actual requirement; let me know if there are more efficient queries for this.

index=* user= apptusr user=oracleapp user=oracledb user=oracleftp |stats count by src dest user name action index
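Not the poster's query, but worth noting for answers: multiple `user=` terms on the search line are ANDed, so no single event can match them all; `IN` expresses the intended OR (usernames copied from the post):

```
index=* user IN ("apptusr", "oracleapp", "oracledb", "oracleftp")
| stats count by src dest user action index
```

Scoping `index=*` down to the indexes that actually carry these users will also make this far cheaper to run.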
Hi Everyone, I want to parse the custom application logs below. I need your help and advice.

12084( 14140) 11/02/2020 15:39:09 RTE I Base login Response: 0.999 -- RAD: 0.000 JS: 0.313 Log:0.000 Database: 0.686(00910) LDAP: 0.000 LoadBalancer: 0.000 (CPU 0.171) application:login,cleanup
12084( 14140) 11/02/2020 15:39:09 RTE I -Memory : S(4638608) O(809484) MAX(5448092) - MALLOC's Total(143004)
12084( 14140) 11/02/2020 15:39:08 RTE I User integration has logged in and is using a Named license ( 17 out of a maximum 50 )
12084( 14140) 11/02/2020 15:39:08 JRTE I GUID=b2125754-dcca-41a2-846f-f7783841fd8e
12084( 14140) 11/02/2020 15:39:08 RTE I SQL Server default schema is dbo
12084( 14140) 11/02/2020 15:39:08 RTE I MS SQL Server collation 'Arabic_100_CI_AS', varchar codepage 1256, comparison 196609: case insensitive, accent sensitive
12084( 14140) 11/02/2020 15:39:08 RTE I Connected to Data source 'SM' SQL server 'JUSTQQ-HPSQL01' version: 12.0.6329 through SQL driver version: 10.0.14393 using database 'SMPP' as user 'dbo'
12084( 14140) 11/02/2020 15:39:08 RTE I Connection established to dbtype 'sqlserver' database 'SM' user 'sm'
12084( 14140) 11/02/2020 15:39:08 RTE I API=SQLConnect
12084( 14140) 11/02/2020 15:39:08 RTE I Info: SQL State: 01000-5703 Message: [Microsoft][ODBC SQL Server Driver][SQL Server]Changed language setting to us_english.
12084( 14140) 11/02/2020 15:39:08 RTE I Info: SQL State: 01000-5701 Message: [Microsoft][ODBC SQL Server Driver][SQL Server]Changed database context to 'SMAPP'.
12084( 11436) 11/02/2020 15:39:08 JRTE I Webservice API session - Thread ID: 7C5ACF86B350A4A66FA0B58E083; Client IP: 192.168.1.1; session timeout: 45 seconds
12084( 14140) 11/02/2020 15:39:08 RTE I Total sessions since process began: 53144
12084( 14140) 11/02/2020 15:39:08 RTE I Thread 7C5ACF86B350A4A66FA795130B58E083 initialization done. Thread 1 of 50.
12084( 14140) 11/02/2020 15:39:08 RTE I Thread attaching to resources with key 0x61E13C00
12084( 14140) 11/02/2020 15:39:08 RTE I Host network address: 10.10.1.1
12084( 14140) 11/02/2020 15:39:08 RTE I Process sm 9.64.1003 (P1) System: 14080 (0x61E13C00) on PC (x64 64-bit) running Windows (6.2 Build 9200) Timezone GMT+03:00 Locale en_US from JUSTQR-SM01
12084( 14140) 11/02/2020 15:39:08 RTE I Using "utalloc" memory manager, mode [0]
12084( 11436) 11/02/2020 15:39:08 JRTE I Creating new worker thread 7C5ACF86B350A4A66FA0B58E083 t@52
9048( 19280) 11/02/2020 15:39:08 RAD I [INFO][urlCreator]: ====>https://xxxx.domain.com/src?ctx=docEngine&file=probsummary&query=number%3%22IM363572%22&action=&title=%D8%A7%D9%D8%AD%D8%AF%D8%AB%20IM572&queryHash=e33d06a15491712affa08b149ae19d699f58d665bd1e1abf3d53bfbe9
15984( 20140) 11/02/2020 15:39:07 RAD I [INFO][urlCreator]: ====>https://xxxx.domain.com/src?ctx=docEngine&file=probsummary&query=number%3%22IM363572%22&action=&title=%D8%A7%D9%D8%D8%AF%D8%AB%20IM572&queryHash=e33d06a15491712affa08b149ae19d699f58d665bd1e1abf3d53bfbe9/src?ctx=docEngine&file=probsummary&query=number%3D%22IM363588%22&action=&title=Incident%20IM363588&queryHash=2a2036befaa7c31d4c9b2ca74cbfa5535994125ecf77037a490cf55
15984( 20140) 11/02/2020 15:39:06 RAD I [INFO][urlCreator]: ====>https://xxxx.domain.com/src?ctx=docEngine&file=probsummary&query=number%3%22IM363572%22&action=&title=%D8%A7%D9%D8%AF%D8%AB%20IM572&queryHash=e33d06a15491712affa08b149ae19d699f58d665bd1e1abf3d53bfbe9/src?ctx=docEngine&file=probsummary&query=number%322IM363588%22&action=&title=Incident%20IM363588&queryHash=2a2036befaa7c31d4c9b2ca74cbfa5535994125ecf77037a4955
7440( 11732) 11/02/2020 15:39:05 RTE A Performance-7-$G.imAreas, Globallist $G.imAreas contains too many items! num=705 ; application(display), panel(show.rio)
7440( 11732) 11/02/2020 15:39:05 RTE A Performance-7-$G.imAreas.local, Globallist $G.imAreas.local contains too many items! num=705 ; application(display), panel(show.rio)
7440( 11732) 11/02/2020 15:39:05 RTE I -Memory : S(12882272) O(3118468) MAX(16000740) - MALLOC's Total(781539)
6984( 192) 11/02/2020 15:39:05 RAD I [INFO][urlCreator]: ====>https://xxxx.domain.com/src?ctx=docEngine&file=probsummary&query=number%3%22IM363572%22&action=&title=%D8%A7%D9%D8%D8%AF%D8%AB%20IM572&queryHash=e33d06a15491712affa08b149ae19d699f58d665bd1e1abf3d53bfbe9/src?ctx=docEngine&file=probsummary&query=number%3D%22IM362922%22&action=&title=Incident%20IM362922&queryHash=e06d549b1934cd837c5e5cd261d3003ca5f9281376f3251b72
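For whoever picks this up: the header format looks consistent (pid, thread id, date, time, component, severity flag, then the message), so a search-time extraction in props.conf might be sketched like this — the sourcetype and field names are my own guesses, not from the poster:

```
# props.conf
[custom_app_log]
EXTRACT-header = ^(?<pid>\d+)\(\s*(?<tid>\d+)\)\s+(?<log_date>\d{2}/\d{2}/\d{4})\s+(?<log_time>\d{2}:\d{2}:\d{2})\s+(?<component>\S+)\s+(?<severity>\S)\s+(?<message>.*)
```

With the header fields in place, the per-message details (Response times, SQL states, URLs) can be layered on with additional `EXTRACT-` stanzas or `rex` at search time.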
Hey there. I have a dashboard with a search query:

Index=my index TestRunId="$RunId$" | dedup TestName | eval Status=case(Outcome==0, "Failed", Outcome==1, "Passed") | table TestName

I want to keep the Outcome per row in a token so I can color each test name by its outcome. How can I do it? Or is there a way to table both TestName and Outcome, but hide the Outcome column and color the row based on the Outcome value? Thanks.
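One pattern for anyone answering (not the poster's code): include Status as a real column and color that column with a map-type colorPalette in Simple XML. Field names follow the post; the hex colors are arbitrary:

```xml
<table>
  <search>
    <query>index="my index" TestRunId="$RunId$" | dedup TestName
           | eval Status=case(Outcome==0, "Failed", Outcome==1, "Passed")
           | table TestName, Status</query>
  </search>
  <format type="color" field="Status">
    <colorPalette type="map">{"Passed":#65A637,"Failed":#D93F3C}</colorPalette>
  </format>
</table>
```

Coloring the TestName cell by a *different* field's value (or hiding the Status column while still using it) typically needs a small dashboard JS extension rather than plain Simple XML.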
When the UF is stopped, data won't be indexed. But once the UF is up and running again, will it forward the old/missed data from when it was down? I want to understand whether the events/logs generated during the UF's downtime are still forwarded to the indexers once the UF starts running. Thank you.
Hello Splunkers, I need to filter logs at the HF so that only a single log from each source on every host is sent once a day to indexer A, while all logs are forwarded to indexer B. Indexer B is the customer's indexer, so we won't have access to it. Indexer A is the one we own; we need to use it to validate every day whether any log from a certain application is showing up. This is part of the log-generation validation our customers have asked us for.
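Not a full solution, but for anyone answering: the usual building block is a `_TCP_ROUTING` override on the HF. Props/transforms cannot count "exactly one event per source per day", so a hedged compromise is routing everything to B by default and duplicating one recognizable, known-daily event (e.g. a heartbeat line) to A as well. Group names, server addresses, and the regex below are invented:

```
# outputs.conf
[tcpout]
defaultGroup = indexerB

[tcpout:indexerB]
server = idxB.example.com:9997

[tcpout:indexerA]
server = idxA.example.com:9997

# props.conf
[my_sourcetype]
TRANSFORMS-route = route_sample_to_A

# transforms.conf
[route_sample_to_A]
REGEX = <pattern matching the one daily event>
DEST_KEY = _TCP_ROUTING
FORMAT = indexerA,indexerB
```

Note that `FORMAT` must list both groups, because the override replaces the default routing for matching events rather than adding to it.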
Hi, Here is my query:

| search SRCreateRequest Completed | stats count as CreateSR
| appendcols [search SRUpdateRequest Completed | stats count as UpdateSR]
| appendcols [search SRPublishRequest Completed | stats count as PublishSR]
| transpose header_field=a
| appendcols [search SRCreateRequest ERROR | stats count as Failure]
| append [search SRUpdateRequest ERROR | stats count as Failure]
| append [search RPublishRequest ERROR | stats count as Failure]
| appendcols [search SRCreateRequest response | stats count as Response]
| append [search SRUpdateRequest response | stats count as Response]
| append [search RPublishRequest response | stats count as Response]
| rename "column" as "API", "row 1" as "Success"
| table API,Success,Failure,Response

The output is not coming out as a proper table. Any suggestions?
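For anyone answering: `appendcols`/`append` chains like this are fragile because each subsearch produces rows that don't line up with the others. A single pass over the events with `chart` often yields the table directly. Search terms are copied from the post; `searchmatch` is a standard eval function, but whether these keywords uniquely identify your events is an assumption:

```
(SRCreateRequest OR SRUpdateRequest OR SRPublishRequest) (Completed OR ERROR OR response)
| eval API=case(searchmatch("SRCreateRequest"), "CreateSR",
                searchmatch("SRUpdateRequest"), "UpdateSR",
                searchmatch("SRPublishRequest"), "PublishSR")
| eval Outcome=case(searchmatch("ERROR"), "Failure",
                    searchmatch("Completed"), "Success",
                    searchmatch("response"), "Response")
| chart count over API by Outcome
```

This produces one row per API with Success/Failure/Response columns, with no subsearches to keep aligned.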
While upgrading my indexers from 7.0 to 8.0, the data disk migration for the hot/warm, cold, and thawed DBs is failing with this message:

homePath='/var/opt/splunk/db/_audit' of index=_audit on unusable filesystem.
Validating databases (splunkd validatedb) failed with code '1'. If you cannot resolve the issue(s) above after consulting documentation, please file a case online at http://www.splunk.com/page/submit_issue
Error running pre-start tasks

The mount command output shows the disks being rw:

/dev/xvde on /var/opt/splunk/thaweddb type ext4 (rw,relatime,seclabel,data=ordered)
/dev/xvdd on /var/opt/splunk/colddb type ext4 (rw,relatime,seclabel,data=ordered)
/dev/xvdc on /var/opt/splunk/db type ext4 (rw,relatime,seclabel,data=ordered)

fdisk -l also shows these disks attached OK. I have tried the OPTIMISTIC_ABOUT_FILE_LOCKING=1 bypass, but that does not work either. Is there any way around this?
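For anyone digging in: the 8.0 pre-start check tests what the splunk user can actually do on the filesystem, not just what `mount` reports, so a quick probe of whether each data path is really writable by that user can help separate permission/SELinux issues from true filesystem-type issues. A generic sketch (the paths you pass are yours; run it as the user that owns splunkd):

```shell
#!/bin/sh
# Probe whether a directory is actually writable by the current user,
# independent of what `mount` reports (catches permissions/SELinux denials).
check_writable() {
    dir="$1"
    probe="$dir/.splunk_write_probe.$$"
    if touch "$probe" 2>/dev/null; then
        rm -f "$probe"
        echo "writable: $dir"
    else
        echo "NOT writable: $dir"
    fi
}

check_writable /tmp
```

If a path fails this probe despite the `rw` mount flag, the `seclabel` in your mount output suggests checking SELinux denials (`ausearch -m avc`) before anything else.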
I want colour coding, like this:
If the field is 90 to 95, colour it red.
If 80 to 85, I want orange.
If 70 to 75, I want yellow.
How could I do this in a visualization?
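For reference (not from the post): in Simple XML tables, an expression-type colorPalette can encode exactly these ranges. The field name and hex colours below are placeholders:

```xml
<format type="color" field="myfield">
  <colorPalette type="expression">case(value &gt;= 90 AND value &lt;= 95, "#D93F3C",
       value &gt;= 80 AND value &lt;= 85, "#F58F39",
       value &gt;= 70 AND value &lt;= 75, "#F7BC38")</colorPalette>
</format>
```

The `&gt;`/`&lt;` entities are just XML escaping for `>`/`<`; for single-value visualizations, `rangemap`/`rangeColors` options cover the same need.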