All Posts


<table>
  <title>Hot Ports (ADTrans/hour)</title>
  <search>
    <query>index=abc source=port | rename port.port as Port | stats sum(port.code.557) as Tcount by Port | table Port Tcount | sort -Tcount | head 10</query>
    <earliest>-1h</earliest>
    <latest>now</latest>
  </search>
  <option name="drilldown">none</option>
</table>
</panel>
<panel id="PortGbytes">
  <html>
    <style>
      table tbody tr th td { font-size: 75% !important; padding: 0px 1px !important; }
      .dashboard-panel .panel-head h3 { padding: 1px 1px 1px 1px !important; font-size: 10px !important; }
      #PortGbytes { height: 250px !important; width: 10% !important }
    </style>
  </html>
  <table>
    <title>Hot Ports (GB/hour)</title>
    <search>
      <query>index=abc source=port | rename port.port as Port | stats sum(port.xfer_bytes) as Xbytes by Port | eval Gbytes=round(Xbytes/(1024*1024*1024),2) | table Port Gbytes | sort -Gbytes | head 10</query>
      <earliest>-1h</earliest>
      <latest>now</latest>
    </search>
    <option name="drilldown">none</option>
  </table>
</panel>
<panel id="PortPeakClients">
  <html>
    <style>
      #PortPeakClients { height: 250px !important; width: 10% !important }
      table tbody tr th td { font-size: 75% !important; padding: 0px 1px !important; }
      .dashboard-panel .panel-head h3 { padding: 1px 1px 1px 1px !important; font-size: 10px !important; }
    </style>
  </html>
  <table>
    <title>Hot Ports (Peak Clients)</title>
    <search>
      <query>index=abc source=port | rename port.port as Port | stats max(port.numclients) as PeakClients by Port | table Port PeakClients | sort -PeakClients | head 10</query>
      <earliest>-1h</earliest>
      <latest>now</latest>
    </search>
    <option name="drilldown">none</option>
  </table>
</panel>
</row>
<row>
So here is the image of the dashboard. There are 4 tables, and none of the 4 show headers. Sample code for 2 of the tables:

<table>
  <title>Hot Ports (ADTrans/hour)</title>
  <search>
    <query>index=abc source=port | rename port.port as Port | stats sum(port.code.557) as Tcount by Port | table Port Tcount | sort -Tcount | head 10</query>
    <earliest>-1h</earliest>
    <latest>now</latest>
  </search>
  <option name="drilldown">none</option>
</table>
</panel>
<panel id="PortGbytes">
  <html>
    <style>
      table tbody tr th td { font-size: 75% !important; padding: 0px 1px !important; }
      .dashboard-panel .panel-head h3 { padding: 1px 1px 1px 1px !important; font-size: 10px !important; }
      #PortGbytes { height: 250px !important; width: 10% !important }
    </style>
  </html>
  <table>
    <title>Hot Ports (GB/hour)</title>
    <search>
      <query>index=abc source=port | rename port.port as Port | stats sum(port.xfer_bytes) as Xbytes by Port | eval Gbytes=round(Xbytes/(1024*1024*1024),2) | table Port Gbytes | sort -Gbytes | head 10</query>
      <earliest>-1h</earliest>
Please share the source for your dashboard panel.
Hi @sandeep_A1997  Please check the bucket status under Indexer Clustering > Indexes > Bucket Status, and let us know if you see any bucket issues. Some docs links: https://help.splunk.com/en/splunk-enterprise/administer/manage-indexers-and-indexer-clusters/9.4/troubleshoot-indexers-and-clusters-of-indexers/bucket-replication-issues https://splunk.my.site.com/customer/s/article/SF-and-RF-is-not-met-on-Cluster-Manager
Hey @rishabhpatel20, Can you share the dashboard source code here so we can understand why the headers are not visible? Also, please share a clear screenshot of the dashboard that shows the missing header. The second screenshot displays fields like Hot Ports and Trans/Hour; if those are not the headers, what are you expecting? Thanks, Tejas.
I am creating a query, and when I view the result I see a proper table with headers, but after saving it to an existing dashboard it displays just the content without headers. I tried expanding the table size as well.

index=abc source=port | rename port.port as Port | stats sum(port.code.557) as Tcount by Port | sort -Tcount | head 10 | table Port Tcount

Search result

The image below is from the dashboard panel; no headings.
Suddenly we observed that /opt/data was unmounted and its ownership had changed from splunk to root. We mounted it back and restarted the service, but SF and RF are still not being met. We restarted the service from AWS as well, still no response. We have 3 indexers in this cluster. I tried a rolling restart of the remaining indexers; when I restarted the second indexer, Splunk stopped and /opt/data was unmounted with its ownership changed again. I mounted it once more, and the same thing happened with the 1st indexer too; I did not touch the 3rd indexer. So among the 3 indexers, 2 were down; I restarted them, started Splunk on them, and mounted /opt/data, but we still do not see SF and RF being met.
Hi @Mirza_Jaffar1  There is no mention of SSL in the error logs, so I am leaning towards an issue with the pass4SymmKey or another encrypted credential. Have you recently made any changes or installed any apps? If you copied a local directory from another instance that contained encrypted credentials, this instance will be unable to decrypt them, because Splunk encrypts credentials based on its own splunk.secret file. You can verify encrypted keys such as pass4SymmKey by using:

$SPLUNK_HOME/bin/splunk show-decrypted --value '<value>'

When using this you need to change each $ to \$, otherwise Linux will think it is a variable; for example $7$abc -> \$7\$abc. Please let us know what your architecture is like, e.g. which instance this is within your architecture and whether you made any recent changes.

Did this answer help you? If so, please consider: Adding karma to show it was useful Marking it as the solution if it resolved your issue Commenting if you need any clarification Your feedback encourages the volunteers in this community to continue contributing
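To illustrate the escaping, here is a minimal shell sketch; the value below is a made-up placeholder, not a real encrypted key:

```shell
# Without the backslashes, the shell would expand $7 and $c2FtcGxl
# as (normally empty) variables and mangle the value before the
# splunk binary ever sees it. Escaping each $ keeps it literal.
ENC=\$7\$c2FtcGxl
echo "$ENC"    # prints $7$c2FtcGxl
```

Single-quoting the whole value works equally well, since the shell does not expand `$` inside single quotes.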
Hello @gabriele_chini, Can you provide the code you use for generating the token, and do you save it in the KV store? How long does the token stay active, and do you regenerate it once it has expired? Thanks, Tejas.
Try (temporarily) adding a new panel to see what your users are getting back from the saved search and whether there are any errors:

<row>
  <panel>
    <title>Operational times</title>
    <table>
      <search>
        <query>| savedsearch set_operational_hours</query>
        <earliest>0</earliest>
        <latest>now</latest>
      </search>
    </table>
  </panel>
</row>

Moving the search may not help if the users' role does not allow them to successfully execute the savedsearch. Please check the permissions (as I said earlier).
Hey @danielbb, When creating architecture diagrams, I have always used a generic config-file icon for any of the conf files, i.e. props, transforms, server, etc. Yes, for inputs there are multiple icons supported, i.e. monitor input, API input, etc. I haven't come across any specific stencils for props/transforms. Thanks, Tejas.
Thanks. I'm wondering if it's a permissions issue. The details of which user the process is running as, and the ownership of the files in /opt/splunkforwarder, should help rule it in or out either way! Let me know if you can get hold of this information. Thanks
Better to use the RPM, as it has pre- and post-install scripts which do some cleanup and similar tasks that are not done if you just unpack the tgz into the /opt/splunk directory! And with the tgz you must always, as root, run "chown -R splunk:splunk /opt/splunk" (or whatever your splunk user is) before you start Splunk after an update!
I suppose that when you are running this:

<search id="operational_hours">
  <query>| savedsearch set_operational_hours</query>
  <finalized>
    <set token="operational_start_time">$result.operational_start_time$</set>

it uses some indexes or other knowledge objects which are not allowed for regular users. You should check whether those users can run that command in the GUI and get results. If I remember correctly there were some restrictions on using at least loadjob in a SHC, but I'm not sure if the same kind of restrictions apply to the savedsearch command; I couldn't find any mention in the docs. Then you must remember this:

When the savedsearch command runs a saved search, the command always applies the permissions associated with the role of the person running the savedsearch command to the search. The savedsearch command never applies the permissions associated with the role of the person who created and owns the search to the search. This happens even when a saved search has been set up to run as the report owner.

If you need this to run as the owner instead of the running user, you must use a ref on the dashboard for those queries. But even then you cannot add or modify the parameters which those searches accept and use; if you try to use them, Splunk runs those also as the user, not as the owner.
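For comparison, a ref-based panel might look something like this (a sketch only; the report name is taken from the snippet above, the surrounding Simple XML is assumed):

```xml
<row>
  <panel>
    <table>
      <!-- ref runs the saved report itself, so its "run as owner"
           setting is honored, instead of re-running the SPL as the
           viewing user via | savedsearch -->
      <search ref="set_operational_hours"></search>
    </table>
  </panel>
</row>
```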
Hello @NanSplk01, If it is only the actions field that you're interested in from the subsearch, you don't need to perform all of the other operations. But since you're using splunk_server=* in the second search, here's something that might help you:

| rest /servicesNS/-/-/saved/searches
| search title=kafka*
| rename dispatch.earliest_time AS "frequency", title AS "title", eai:acl.app AS "app", next_scheduled_time AS "nextRunTime", search AS "query", updated AS "lastUpdated", action.email.to AS "emailTo", action.email.cc AS "emailCC", action.email.subject AS "emailSubject", alert.severity AS "SEV"
| eval severity=case(SEV == "5", "Critical-5", SEV == "4", "High-4", SEV == "3", "Warning-3", SEV == "2", "Low-2", SEV == "1", "Info-1")
| eval identifierDate=now()
| convert ctime(identifierDate) AS identifierDate
| table identifierDate title lastUpdated, nextRunTime, emailTo, query, severity, emailTo
| fillnull value=""
| sort -lastUpdated
| join type=left title
    [| rest "/servicesNS/-/-/saved/searches" timeout=300 splunk_server=*
     | search disabled=0 AND title="kafka*"
     | fields title actions splunk_server
     | stats values(actions) as actions by title splunk_server]

Let me know if this helps your use case. Thanks, Tejas.

--- If the solution works, an upvote is appreciated..!!
splunk and root permission conflicts; as per the logs, permission errors.

1- wget the version into /opt
2- give the .tgz splunk permissions
3- stop the splunk services
4- extract the tgz as the splunk user while upgrading

This should work.
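The numbered steps above can be sketched roughly as follows; the paths and the splunk user name are assumptions for a default install, and <version> is a placeholder, not a real release number:

```shell
# 1-2: fetch the release into /opt and hand it to the splunk user
cd /opt
wget <download URL for splunk-<version>-Linux-x86_64.tgz>
chown splunk:splunk splunk-<version>-Linux-x86_64.tgz

# 3: stop the running instance
/opt/splunk/bin/splunk stop

# 4: extract as the splunk user so file ownership stays consistent
sudo -u splunk tar -xzf splunk-<version>-Linux-x86_64.tgz -C /opt

# start again as the splunk user
sudo -u splunk /opt/splunk/bin/splunk start
```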
What does this indicate?

06-19-2025 11:09:33.046 +0000 ERROR AesGcm [65605 MainThread] - Text decryption - error in finalizing: No errors in queue
06-19-2025 11:09:33.046 +0000 ERROR AesGcm [65605 MainThread] - AES-GCM Decryption failed!
06-19-2025 11:09:33.047 +0000 ERROR Crypto [65605 MainThread] - Decryption operation failed: AES-GCM Decryption failed!
06-19-2025 11:09:33.081 +0000 ERROR AesGcm [65605 MainThread] - Text decryption - error in finalizing: No errors in queue
06-19-2025 11:09:33.081 +0000 ERROR AesGcm [65605 MainThread] - AES-GCM Decryption failed!
06-19-2025 11:09:33.081 +0000 ERROR Crypto [65605 MainThread] - Decryption operation failed: AES-GCM Decryption failed!
Hello @eriktb, Did you try making DIRECT a static and default value for the dependent dropdown? You can keep the makeresults section as it is, to have different values populated based on the environment selection. However, if I understood the request properly, you need the DIRECT value selected by default for any environment, and the case statement also ensures that for any environment value, DIRECT is one of the choices for the dependent dropdown. Also, when I used your source code in my lab environment, it shows DIRECT as the selected value no matter what the environment dropdown selection is. Check the following screenshots for your reference. I'm on Splunk v9.3.2. Let me know if you meant something else and we can think about it. Thanks, Tejas.
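By way of illustration, a dependent dropdown with DIRECT as a static choice and the default could look like this; the token names and the makeresults query here are assumptions, not taken from the original dashboard:

```xml
<input type="dropdown" token="connection">
  <label>Connection</label>
  <!-- static choice, available for every environment -->
  <choice value="DIRECT">DIRECT</choice>
  <!-- selected by default regardless of the environment selection -->
  <default>DIRECT</default>
  <!-- dynamic choices driven by the environment token -->
  <search>
    <query>| makeresults
| eval option=case("$env$"=="prod", "PROXY_A", "$env$"=="dev", "PROXY_B")
| fields option</query>
  </search>
  <fieldForLabel>option</fieldForLabel>
  <fieldForValue>option</fieldForValue>
</input>
```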
Hi @livehybrid  Thank you for your reply. We don't have an indexer cluster; I understand that if I create a cluster and have replica buckets, it will be possible to protect the buckets. Thank you for introducing the reference site; I will check it out. I am planning to restore a configuration file from EBS. Will Splunk still function if some of the logs in the EBS hot buckets and some of the logs in the S3 warm buckets are duplicated?
Hey @ilhwan, I believe that, as of now, Splunk Support will only be able to provide you with a roadmap for when they plan to add support for Infoblox NIOS 9.0.3. Also, the last version was published in 2022, and since they haven't planned a release, I doubt it's going to be anytime soon. However, you can post the request on ideas.splunk.com and get it voted up to gain their traction and perhaps prompt a new release supporting the newer version of Infoblox. Additionally, if the timeline for support is far away and you have a use case, you can opt to develop a custom TA. If you enjoy scripting, you can definitely create a new add-on using the Splunk Add-on Builder and the UCC framework. Thanks, Tejas. --- If the solution helps your use case, an upvote is appreciated..!!