Hello, I would like to change the table cell background color of the top 3 values in each column of my search results. For example, the top 3 values of column No. 1 (50, 29, 25) need to be colored in column No. 1. How can I change those cells' background color?
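One possible way to drive that kind of highlighting is to compute a helper field in the search that marks whether a value is among the top 3 of its column, and then key the dashboard's cell formatting off that field (the coloring itself lives in the dashboard configuration, which is not shown here). A minimal SPL sketch for a single hypothetical column named No1; the field and flag names are placeholders, not anything from the original post:

... your base search ...
| sort 0 - No1
| streamstats count as rank_No1
| eval No1_top3=if(rank_No1<=3, "top3", "other")
| fields - rank_No1

The same sort/streamstats/eval pattern would be repeated for each column that needs its own top-3 flag.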
Just seen this very old Answer concerning the XML view template: https://community.splunk.com/t5/Dashboards-Visualizations/Edit-the-default-dashboard-template/m-p/17091 This would seem to imply that onunloadCancelJobs is the default behaviour (generated into the XML preamble), but the docs indicate there is no default setting. I don't know enough about how Splunk does all this to figure it out, and I cannot find any other references except these two, apparently contradictory ones. Anybody on the inside care to clarify the actual situation? I'd like to know whether to add this rather useful property to all my views, since I've only just found out about it. Also, if it's not the default behaviour... why not? Thank you, Charles
Hi, I am having trouble changing the email address on my account. My company is doing an email domain renewal and I need to change my email to a new address. Could you help me? Thanks and Regards, Dano
Frequently asked questions about the Windows Private Synthetic Agent (PSA) deprecation, which started November 21, 2021. Watch this page for updates: click the 3-dot menu at upper right, then subscribe.

On October 8, 2021, we announced that we will be deprecating our Windows Private Synthetic Agent (PSA) due to limitations in Windows-based environments that have caused security vulnerabilities. This article provides full details of the deprecation timeline along with instructions for moving to our Linux PSA to avoid any disruption to your services.

In this article:
- Why is AppDynamics deprecating its Windows-based PSA?
- When will the Windows-based PSA be deprecated?
- What is the alternative for the Windows-based PSA and how do I install it?
- Customer Notification Emails

Why is AppDynamics deprecating its Windows-based PSA?
The Windows-based Private Synthetic Agent (PSA) has several known security vulnerabilities and information security compliance issues. Many of these issues are caused by limitations in the Windows-based environment. We were also running into limitations that prevented us from adding new features and capabilities to the Windows-based PSA. To address these issues, we developed a new Linux-based PSA, which is secure, scalable, reliable, cost-effective, and operationally efficient, as an alternative to the Windows version.

When will the Windows-based PSA be deprecated?
The Windows-based PSA deprecation process was first announced October 21, 2021, and the agent will be deprecated on November 21, 2021. The Windows-based PSA will be supported for one year, until November 21, 2022. Any Urgent or High issues for the Windows-based PSA will be addressed during this period. However, we will not address any of the known security vulnerabilities associated with the Windows-based PSA.

What is the alternative for the Windows-based PSA and how do I install it?
The AppDynamics Linux PSA is available starting with version 21.5. See the Private Synthetic Agent Linux-based agent in a Kubernetes container documentation. PLEASE NOTE that the Linux-based PSA runs on a Kubernetes cluster and can be installed in: Minikube, Amazon EKS, Azure AKS, Google Kubernetes Engine (not supported for on-premises Synthetics servers), and Kubernetes on a bare-metal server.

To install the Linux-based PSA, follow the instructions below:
1. Upgrade the Synth server to version 21.4.1 or above.
2. Download the Linux-based PSA from the AppDynamics downloads portal.
3. Set up the Kubernetes cluster and install the agent.
Hi All, I'm trying to get data tied together into one matrix from Jira (API-fed) that uses two source types (shown below). What I need is for each issue (key) to have the following attributes represented as columns:

Column Name      sourcetype
key              jira:issues:json
team_name        jira:issues:json
created_weekNo   jira:issues:json
created_yearNo   jira:issues:json
creationDate     jira:issues:json
slaName          jira:sla:json

Problem: The "key" is the unique identifier that can marry the two data sets, but I'm having trouble getting the "slaName" from my "jira:sla:json" to combine as a column with my "jira:issues:json" fields. Side note: the "key" will have multiple entries that need to be reflected as separate rows, but the column information obtained from the "jira:issues:json" will be static over the lifetime of the ticket (as this is just the created date).

Ask: If anyone has any best practices that could help me out, it would be greatly appreciated. Using subsearch and appendcols is getting confusing, and I'm not looking for any stats functions right now; just getting the table together is the main goal, to eventually turn this into a visualization. Thanks for your help!
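For what it's worth, one common pattern for this kind of merge is to avoid join/appendcols entirely and let stats roll both sourcetypes up by the shared key. A rough sketch, assuming a placeholder index name of jira and the field names from the table above:

index=jira sourcetype IN ("jira:issues:json", "jira:sla:json")
| stats values(team_name) as team_name values(created_weekNo) as created_weekNo values(created_yearNo) as created_yearNo values(creationDate) as creationDate values(slaName) as slaName by key
| table key team_name created_weekNo created_yearNo creationDate slaName

If a key should produce one row per SLA entry, an mvexpand slaName after the stats would split the multivalue field back into separate rows.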
Hi Community - I'm trying to extend the Levenshtein distance query in this tutorial: https://www.splunk.com/en_us/blog/tips-and-tricks/you-can-t-hyde-from-dr-levenshtein-when-you-use-url-toolbox.html. Specifically, I'm trying to evaluate the Levenshtein distance of an email domain against multiple comparison domains on one line, with the resulting values going into a multivalue field. I tried doing this with mvmap:

| eval lev=mvmap(inspect_domains, `ut_levenshtein(ut_domain, inspect_domains)`)

where inspect_domains is the multivalue field containing comparison domains, and ut_levenshtein is a search macro in the URL Toolbox app. This returns an error: "Error in 'eval' command: The expression is malformed. Expected )." To my eye, the parentheses appear to be balanced. I nevertheless tried adding or removing parentheses to try to make Splunk happy, but no combination of parentheses seems to work. Any ideas?
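One likely explanation (an assumption, not confirmed against the URL Toolbox source) is that ut_levenshtein expands into its own search commands rather than behaving as an eval function, so it cannot be nested inside mvmap(), which only accepts eval expressions. A workaround sketch that expands the multivalue field into separate rows first, runs the macro per row, and then re-collects the results:

... your base search ...
| mvexpand inspect_domains
| `ut_levenshtein(ut_domain, inspect_domains)`
| stats list(inspect_domains) as inspect_domains list(ut_levenshtein) as lev by ut_domain

This assumes the macro writes its result into a field named ut_levenshtein; if distinct events can share the same ut_domain, grouping by an event-specific id instead would keep them separate.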
Need help writing a request.

file1.csv
username  src_ip
John      192.168.16.35
Smith     172.167.3.43
Aram      132.56.23.3

file2.csv
IP address       ASN   Other
192.168.16.0/24  1234  RU
172.167.3.0/24   4321  AG
132.56.23.0/24   6789  BR

output
username  src_ip         asn   other
John      192.168.16.35  1234  RU
Smith     172.167.3.43   4321  AG
Aram      132.56.23.3    6789  BR

Thanks guys !!!!
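A sketch of one way to do this, assuming file2.csv has been set up as a lookup definition (called file2_lookup here, a placeholder name) with CIDR matching enabled on the "IP address" field, since a plain CSV lookup would only match exact strings:

| inputlookup file1.csv
| lookup file2_lookup "IP address" as src_ip OUTPUT ASN as asn Other as other
| table username src_ip asn other

The CIDR match_type on the lookup definition is what lets each src_ip match its containing subnet in file2.csv.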
Hi there, can I please know how to make the REQUEST_ID clickable from the query below? I want to pass the REQUEST_ID from query 1 to query 2 when the user clicks on the REQUEST_ID in the table from query 1.

Query 1:
index=<<index_name>>
| dedup REQUEST_ID
| table USER_ID, ENTITY_TYPE, ENTITY_ID, REQUEST_ID, STATUS
| where USER_ID="123123123"

Query 2:
index=<<index_name>> "error"
| where $REQUEST_ID$

Thank you
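The click-to-pass behavior itself is configured on the dashboard panel (a drilldown that sets a token from the clicked row), which isn't shown here; on the search side, the second query then just consumes that token. A sketch of query 2, assuming the drilldown sets a token named request_id (a placeholder name that must match whatever the drilldown sets):

index=<<index_name>> "error" REQUEST_ID="$request_id$"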
Hello, I have some SQL .trc binary files that need to be ingested into Splunk from a SQL Server where a UF has already been installed. I am considering using the MS SQL TA, which allows us to convert the .trc binary files to text files and send them to the Splunk indexer/SH. My question is where I should deploy the SQL TA: on the UF, HF, or SH. Thank you so much, I appreciate your support in these efforts.
Please help me fix this SPL to produce the license usage listed above. Thx a million. This is not working for me:

index="_internal"
| stats sum(GB) as A by Date, idx
| eventstats max(A) as B by idx
| where A=B
| dedup A idx
| sort idx
| table Date, A, idx
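Not a fix of the query above, but for reference, a commonly used pattern reads per-index usage directly from license_usage.log (where the standard fields are b for bytes and idx for the index name) rather than relying on a pre-existing GB field; a sketch:

index=_internal source=*license_usage.log* type=Usage
| bin _time span=1d
| stats sum(b) as bytes by _time idx
| eval GB=round(bytes/1024/1024/1024, 3)
| eval Date=strftime(_time, "%Y-%m-%d")
| table Date idx GB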
I'm using tstats on an accelerated data model which is built off of a summary index. Everything works as expected when querying both the summary index and the data model, except in an exceptionally large environment that produces 10-100x more results when running dc().

This works fine in said environment and produces around 17,000,000:

| tstats summariesonly=true count(assets.hostname) from datamodel="Summary_Host_Data" where (earliest=-1d latest=now)

This produces 0 results, when it should be around 400,000:

| tstats summariesonly=true dc(assets.hostname) from datamodel="Summary_Host_Data" where (earliest=-1d latest=now)

Even though the summary index works fine and produces around 400,000:

index=summary_host_data earliest=-1d | stats dc(hostname)

Finally, if I search over 6 hours instead of 1d, I do get results from the tstats using dc(). Is there some type of limit I'm running into with dc()? Or is there something else going on?
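One thing that may be worth trying (a sketch, not a confirmed fix) is moving the distinct count out of tstats, so that tstats only emits the grouped values and the dc() happens in a follow-on stats:

| tstats summariesonly=true count from datamodel="Summary_Host_Data" where earliest=-1d latest=now by assets.hostname
| stats dc(assets.hostname) as distinct_hosts

If that version returns the expected ~400,000, the original behaviour may come down to a memory or limits.conf ceiling being hit by dc() inside tstats at that cardinality.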
I just set up a heavy forwarder with the AWS add-on. I launched the app and went to the configuration page but all I get is the spinning 'Loading' icon. I looked over the previous question like this: https://community.splunk.com/t5/All-Apps-and-Add-ons/Splunk-Add-on-for-AWS-Hangs/m-p/316801

But I'm not using any Google services at all, nor do I have any file "/etc/boto.cfg".

When I look through the splunkd logs I see this:

+0000 ERROR AdminManagerExternal [1021 TcpChannelThread] - Stack trace from python handler:
Traceback (most recent call last):
  File "/home/y/var/splunk/lib/python3.7/site-packages/splunk/admin.py", line 114, in init_persistent
    hand.execute(info)
  File "/home/y/var/splunk/lib/python3.7/site-packages/splunk/admin.py", line 637, in execute
    if self.requestedAction == ACTION_LIST: self.handleList(confInfo)
  File "/home/y/var/splunk/etc/apps/Splunk_TA_aws/bin/splunk_ta_aws_rh_settings.py", line 58, in handleList
    entity = client.Entity(self._service, uri % (service, LOGGING_ENDPOINTS[service]))
  File "/home/y/var/splunk/etc/apps/Splunk_TA_aws/bin/3rdparty/python3/splunklib/client.py", line 900, in __init__
    self.refresh(kwargs.get('state', None)) # "Prefresh"
  File "/home/y/var/splunk/etc/apps/Splunk_TA_aws/bin/3rdparty/python3/splunklib/client.py", line 1039, in refresh
    self._state = self.read(self.get())
  File "/home/y/var/splunk/etc/apps/Splunk_TA_aws/bin/3rdparty/python3/splunklib/client.py", line 1009, in get
    return super(Entity, self).get(path_segment, owner=owner, app=app, sharing=sharing, **query)
  File "/home/y/var/splunk/etc/apps/Splunk_TA_aws/bin/3rdparty/python3/splunklib/client.py", line 766, in get
    **query)
  File "/home/y/var/splunk/etc/apps/Splunk_TA_aws/bin/3rdparty/python3/splunklib/binding.py", line 290, in wrapper
    return request_fun(self, *args, **kwargs)
  File "/home/y/var/splunk/etc/apps/Splunk_TA_aws/bin/3rdparty/python3/splunklib/binding.py", line 71, in new_f
    val = f(*args, **kwargs)
  File "/home/y/var/splunk/etc/apps/Splunk_TA_aws/bin/3rdparty/python3/splunklib/binding.py", line 680, in get
    response = self.http.get(path, all_headers, **query)
  File "/home/y/var/splunk/etc/apps/Splunk_TA_aws/bin/3rdparty/python3/splunklib/binding.py", line 1184, in get
    return self.request(url, { 'method': "GET", 'headers': headers })
  File "/home/y/var/splunk/etc/apps/Splunk_TA_aws/bin/3rdparty/python3/splunklib/binding.py", line 1242, in request
    response = self.handler(url, message, **kwargs)
  File "/home/y/var/splunk/etc/apps/Splunk_TA_aws/bin/3rdparty/python3/splunklib/binding.py", line 1383, in request
    connection.request(method, path, body, head)
  File "/home/y/var/splunk/lib/python3.7/http/client.py", line 1277, in request
    self._send_request(method, url, body, headers, encode_chunked)
  File "/home/y/var/splunk/lib/python3.7/http/client.py", line 1323, in _send_request
    self.endheaders(body, encode_chunked=encode_chunked)
  File "/home/y/var/splunk/lib/python3.7/http/client.py", line 1272, in endheaders
    self._send_output(message_body, encode_chunked=encode_chunked)
  File "/home/y/var/splunk/lib/python3.7/http/client.py", line 1032, in _send_output
    self.send(msg)
  File "/home/y/var/splunk/lib/python3.7/http/client.py", line 972, in send
    self.connect()
  File "/home/y/var/splunk/lib/python3.7/http/client.py", line 1439, in connect
    super().connect()
  File "/home/y/var/splunk/lib/python3.7/http/client.py", line 944, in connect
    (self.host,self.port), self.timeout, self.source_address)
  File "/home/y/var/splunk/lib/python3.7/socket.py", line 728, in create_connection
    raise err
  File "/home/y/var/splunk/lib/python3.7/socket.py", line 716, in create_connection
    sock.connect(sa)
ConnectionRefusedError: [Errno 111] Connection refused

Which looks like it's trying to autoconfigure itself like mentioned in: https://docs.splunk.com/Documentation/AddOns/released/AWS/Setuptheadd-on#Find_an_IAM_role_within_your_Splunk_platform_instance

But this is not running in AWS, so I planned to manually configure it. However, I cannot because of the above problem. How do I fix this?

Splunk-8.2.2
Splunk AWS add-on 5.2.0
Hello Community! Splunk Certification wants to remind everyone that YOU (and your friend, neighbor, and coworker) can get a discount on your next certification exam. Register for a Splunk Certification exam with testing partner PearsonVUE between October 15-25 and use code braggingrightsIRL at checkout for a $50 certification exam (that's a 60% discount!). Exam appointments are available through January 31, so you can register now and test when you're ready. Terms and conditions apply.
The case at https://community.splunk.com/t5/Getting-Data-In/Issue-on-file-monitoring-using-forwader/m-p/478063#M82045 is similar. When files are being FTP'ed to the monitored location, we see errors in _internal saying the file can't be read. Come the weekend, when this host is rebooted, the files get ingested. We looked at MonitorNoHandle, which allows reading a file while it is being written on Windows, but MonitorNoHandle only allows one such file per stanza. We asked the customer to FTP the files to another directory and move them into place later via a script, but the customer wasn't thrilled about this idea. We also thought that maybe there is a way to have the UF check for new files multiple times before putting them on the blacklist, but that doesn't seem to be possible. What can we do?
I have a field named failcode with numerous fail code names, structured like this:

date        failcode  count
2021-10-01  g-ab      123
2021-10-01  g-bc      258
2021-10-01  g-cd      369
2021-10-01  c-ab      456
2021-10-01  c-bc      124
2021-10-01  c-cd      325
2021-10-01  d-ab      854
2021-10-01  d-bc      962
2021-10-01  d-cd      362
2021-10-01  d-dd      851
2021-10-02  g-ab      963
2021-10-02  g-bc      101
2021-10-02  g-cd      171
2021-10-02  c-ab      320
2021-10-02  c-bc      214
2021-10-02  c-cd      985
2021-10-02  d-ab      165
2021-10-02  d-bc      130
2021-10-02  d-cd      892
2021-10-02  d-dd      964
2021-10-03  g-ab      653
2021-10-03  g-bc      285
2021-10-03  g-cd      634
2021-10-03  c-ab      689
2021-10-03  c-bc      752
2021-10-03  c-cd      452
2021-10-03  d-ab      365
2021-10-03  d-bc      125
2021-10-03  d-cd      691
2021-10-03  d-dd      354

I want to only keep certain codes (g-ab, c-cd, and d-dd) and not display the rest in my results. Essentially I just want to display certain results from my failcode column.
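A minimal way to keep only those codes, appended to whatever search produces the table above:

... your base search ...
| search failcode IN ("g-ab", "c-cd", "d-dd")

An equivalent | where failcode IN ("g-ab", "c-cd", "d-dd") also works on recent Splunk versions.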
I have a field called alphabet that stores multiple values. I want to create a search that only returns events that have exactly one of those values. In this example I want to return all events where only alphabet = a exists.

'index = test WHERE alphabet=a' does not work, as it returns all events where 'a' exists alongside other values. For the three events below I would like a search that only returns event 3. How can I build a search that does this?

Event 1: alphabet = a,b,c,d
Event 2: alphabet = a,b
Event 3: alphabet = a
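One way to express "only a, and nothing else" is to combine the value match with a check that the field holds exactly one value; a sketch, assuming alphabet really is a multivalue field:

index=test alphabet="a"
| where mvcount(alphabet)=1

mvcount() returns 1 for a single-value field, so events 1 and 2 above (with 4 and 2 values) are filtered out and only event 3 remains.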
Does anyone know if there is a native k8s installation package available for splunk-connect-for-snmp? All the installation documentation I can find references microk8s on Ubuntu. Thanks, jb
I am trying to implement Splunk across multiple domains. Due to company policy, some domains don't have internet access, and as a result servers in those domains cannot communicate with the indexers located in Splunk Cloud. What is the best way to send data to the indexers in this case? One of the suggestions we received is to use a heavy forwarder as an intermediary, but that would introduce a single point of failure, so we might need to implement a load balancer. Is there any other way?
Hello, in our environment we have Splunk HFs with 2 parallel ingestion pipelines. https://docs.splunk.com/Documentation/Splunk/8.2.2/Capacity/Parallelization#Index_parallelization One aim of these Splunk HFs is to offload the parsing, merging, and typing pipelines from the Splunk indexers. Because of that, the data coming from the Splunk HFs is already "processed", and our indexers mostly handle it only in the index pipeline. https://wiki.splunk.com/Community:HowIndexingWorks On the indexers we only have 1 ingestion pipeline, and the CPU cores typically used for indexing are 4-6. Are our indexers taking advantage of pretty much all of those 4-6 CPU cores for the index pipeline alone, or are they "wasted" on the other, mostly idle pipelines? Thanks a lot, Edoardo
Hi all, I have a simple question in mind. I know that the communication between Splunk instances is partly encrypted by default, while the cluster communication is not, at least not by default. Is there a Splunk recommendation to leave it unencrypted? Are there existing recommendations for intra-indexer-cluster and intra-search-head-cluster communication? What options are there, actually? Kind regards, O.