All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Dear all, How many of you have faced the issue where the Intersplunk library throws this error: AttributeError: module 'splunk.util' has no attribute 'OrderedDict'? In commands.conf I did not specify the Python version. If I specify Python version 2, the script works. But here is the other funny thing: the splunk util file is identical in both site-packages folders, so I do not really understand what is going on here. By the way, the same script works fine in 8.1.1 no matter which version I provide in commands.conf. Thanks,
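For context, in Splunk 8.x custom search commands default to the Python 3 interpreter unless commands.conf says otherwise, which is consistent with the symptom that forcing version 2 makes the script work. A minimal stanza pinning the interpreter might look like the sketch below (the command and script names here are placeholders, not from the post):

```
[mycommand]
filename = mycommand.py
python.version = python2
```

python.version accepts python2, python3, or default; porting the script's Intersplunk usage to be Python 3 compatible is the longer-term fix, since Python 2 support was removed in later releases.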
Hello, Posting here checks off a huge bucket-list item for me! I am hoping what I am sharing is a known issue with a known solution that I have been unable to locate. We have ~90 different services on AWS EKS clusters, in mixed languages and standards (or lack thereof), and we need to migrate our current logging pipeline (log -> CloudWatch -> Lambda -> Splunk UF -> index cluster) from a CloudWatch-based solution to a splunk-connect-for-kubernetes-based one. The only problem with the existing solution is that CloudWatch is a little pricey, and if we can simplify our monitoring while saving money, and reduce the delay in getting logged events into Splunk, even better. Everything works with splunk-connect-for-kubernetes except multi-line events (Java stack traces, MSSQL errors, etc.). Everything we have tried so far to keep these events together as a single multi-line event has failed, with each event getting broken into multiple single-event snippets. We think it might be possible in theory to write service-specific fluentd filters for all 90 services, each one following at least one eventing pattern, but we suspect this is not a feasible approach for the long term. We acknowledge this might make a great use case to revisit and prioritize implementing standard logging for all the services, but feel that will be a hard sell, given the need to deliver a working solution sooner rather than later.
Looking at the boards, I see that multiline is community supported, and the closest relevant issue I found is: https://github.com/splunk/splunk-connect-for-kubernetes/issues/372 This issue describes the problem exactly, but while the last two snippets in the thread are promising, the proposed solution would not work, since it seems to depend on a regular character sequence starting every new event. This other issue may be related: https://github.com/splunk/splunk-connect-for-kubernetes/issues/459 We are really hoping for a generic solution that will match the myriad logging patterns in our services, without having to define matching primary and secondary filters for every log-structure variation currently present in our logs, as per this ref: https://github.com/splunk/splunk-connect-for-kubernetes/issues/255#issuecomment-639915496 Thank you all for getting through my long post!
Should Splunk be connected to the internet, i.e., have internet access? What are the pros and cons?
Just in case it's helpful for anyone, here are some simple commands you can run from Windows PowerShell to uninstall SplunkUniversalForwarder from Windows. This is especially useful if you have a lot of Windows servers to uninstall from, as this solution could easily be scripted. (Uninstalls on Linux are much easier to script.) Per the official Splunk documentation you need to know the exact name of the installation program; this solution doesn't require that prior knowledge and could be extended for uninstalling any MSI-installed program.

& "C:\Program Files\SplunkUniversalForwarder\bin\splunk" stop
$productCode = Get-WmiObject Win32_Product -Filter "name='UniversalForwarder'" | % { $_.IdentifyingNumber }
msiexec /x$productCode /qn

Again, these commands need to be run from PowerShell, not a regular command prompt. The Get-WmiObject command can take several minutes to run. After coming up with this I found similar solutions on StackOverflow, including one that uses the registry and supposedly performs better; I didn't try any of those out. All of these techniques can run into problems if multiple installed programs have the same name, but that's unlikely in this case. In case you're wondering why I didn't reference %SPLUNK_HOME% in the first command, it's because this environment variable is not set on our servers for some reason, presumably due to the way Splunk was originally installed.
I have application logs that contain a message template in a JSON field (@mt), meant to convert other JSON fields into a human-readable message. The content of that field looks like: {RequestProtocol} {RequestMethod} {RequestPath} responded {StatusCode}. The text inside the {} corresponds to the keys of key-value pairs in the rest of the JSON. I made this eval:

eval message="\"".replace(replace(spath(_raw, "@mt"),"{", "\"."),"}",".\"")."\""

This returns:

message="".RequestProtocol." ".RequestMethod." ".RequestPath." responded ".StatusCode.""

i.e., it displays the key names rather than the values. In that same search I have:

eval test=" ".RequestProtocol." ".RequestMethod." ".RequestPath." responded ".StatusCode.""

which returns test=HTTP/1.1 POST /path/to/thing responded 202, i.e., the values. How can I create an eval that builds the message from the template? I would love to make this part of the sourcetype.
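Worth noting why the eval fails: the template is data, not SPL, so string concatenation only produces a string that happens to contain field names; eval never re-evaluates that string as an expression. Outside of SPL, the desired behavior is a plain regex substitution of each {Key} placeholder with the matching field value. A minimal Python sketch of that logic, using the field values from the example above:

```python
import re

def render_template(template, fields):
    """Replace each {Key} placeholder with the matching field value;
    unknown keys are left untouched."""
    return re.sub(r"\{(\w+)\}",
                  lambda m: str(fields.get(m.group(1), m.group(0))),
                  template)

# illustrative event fields from the question
event = {
    "RequestProtocol": "HTTP/1.1",
    "RequestMethod": "POST",
    "RequestPath": "/path/to/thing",
    "StatusCode": 202,
}
template = "{RequestProtocol} {RequestMethod} {RequestPath} responded {StatusCode}"
print(render_template(template, event))
# -> HTTP/1.1 POST /path/to/thing responded 202
```

In Splunk itself, a scripted custom search command (or an ingest-time script) would be the natural place for this kind of dynamic substitution, since built-in eval has no "evaluate a field name stored in another field" operation.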
Hello, I need to apply the October 2020 and January 2021 Java patches. Will this cause any issues with Splunk 8.0.50?
So, in a simple four-server Splunk test lab I have one Cluster Master, two Indexers, and one Deployment Server. Before I knew how to create indexes properly in a clustered-peer environment (i.e., using the indexes.conf file and pushing it as a bundle via the Cluster Master), I created a couple of indexes locally via the Web Console on Indexer1. How do I replicate those to Indexer2 now? Please and thank you, G
Hi, I am evaluating Splunk Cloud (Free Trial for now) and running into this problem while submitting logs to HEC:

SSL_connect returned=1 errno=0 state=error: certificate verify failed (self signed certificate in certificate chain) (OpenSSL::SSL::SSLError)

This is a Ruby on Rails application configured to use the HTTPS HEC endpoint with a self-signed certificate. Everything works normally when the SSL verification mode is set to none. The certificate itself is good, since I could use it to establish HTTPS connections to other, non-Splunk endpoints. So I guess something on the Splunk side is rejecting self-signed certificates. Unfortunately, I cannot find any Splunk UI to manage this behavior, probably because it is a Free Trial. So my questions are: am I missing some settings in the UI? Is this configurable outside Trial mode? Or does it simply require a certificate that is not self-signed? Thanks!
Hello, I am monitoring my Symantec Web Security Services data via the corresponding app. My daily ingest is 7287.00 MB per day, and this is an app that gets used. I understand that monitoring endpoints can consume a lot of data, but has anyone had success reducing WSS data usage while still getting the data you need? I would really like to reduce this by another 2000 MB, but I am out of solutions. Thank you.
I am stuck. I am using the Splunk Add-on Builder with a Python data input, and I want to print variables that are set in the "Data Input Definition" area to the "output" area. I can't get anything to print; not even print("HELLO") will show up. I tried the logging functions and they don't work either. I have a text data input called "splunk_hec_url". In def collect_events I have this:

print("testing")
splunk_hec_url_ID = helper.get_arg('splunk_hec_url')
print(splunk_hec_url_ID)

When I click "test", nothing shows up in the output section. What am I doing wrong? How do I get anything to print to the output section? I tried helper.log_info("log message") and that doesn't show up either.
I used splunk remove shcluster-member to remove a member from an existing cluster. Then, after the search head restarted, I tried to add it back but got the error below. Please let me know how to add it back.

/splunk add shcluster-member -current_member_uri https://<ip-xxx>:<9089>
In handler 'shclustermemberconsensus': Search Head Clustering is not enabled on this node. REST endpoint is not available
Hi, I have this in my message string:

Errors in file /u02/app/oracle/diag/rdbms/pwein1a/pwein1a1/trace/pwein1a1_cjq0_287471.trc:
ORA-12850: Could not allocate slaves on all specified instances: needed, allocated
ORA-16401: archive log rejected by Remote File Server (RFS)

I would like to extract only the substring ORA-nnnnn in a search. Any ideas? I have tried every solution available here in the community, but I am fairly new to Splunk. Thanks, Pierre
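In SPL this kind of extraction is typically done with rex and max_match=0 so that every code in the event is captured, e.g. | rex max_match=0 "(?<ora_code>ORA-\d{5})". The pattern itself can be sanity-checked outside Splunk; a small Python sketch against the sample message above:

```python
import re

message = (
    "Errors in file /u02/app/oracle/diag/rdbms/pwein1a/pwein1a1/trace/"
    "pwein1a1_cjq0_287471.trc:\n"
    "ORA-12850: Could not allocate slaves on all specified instances: "
    "needed, allocated\n"
    "ORA-16401: archive log rejected by Remote File Server (RFS)"
)

# an ORA error code is "ORA-" followed by five digits
codes = re.findall(r"ORA-\d{5}", message)
print(codes)  # -> ['ORA-12850', 'ORA-16401']
```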
My environment: 1 SH, 1 DS (CM, LM), 2 indexers, and a 15 GB/day license. The day before yesterday, two days' worth of logs were ingested into Splunk, pushing license usage well above the daily limit. Yesterday I identified and disabled the source of those logs, and license usage returned to normal. But when I checked today, the license report is not showing today's usage at all. I am attaching snapshots to help explain. Please advise how to get back to the normal state.
How can I display data from the mint index in the dashboards of the Splunk MINT app? In other words, how can I send data to the Splunk MINT app and visualize it in dashboards?
Hi, I am working on setting up an alert that should fire if there have been more than 50 errors in the last 30 minutes. If there have been, I need the alert to include those pages and their counts. Something like this:

requested_content    Status    Count
/my-app/1.html       500       20
/my-app/2.html       500       40
Total                          60

The alert should only trigger if the sum of these counts is > 50, as above. I have written a query, but it only gives the count, not the pages that are throwing the errors. I want to see the pages too:

index=myindex_prod sourcetype=ssl_access_combined requested_content="/my-app/*" status=50* | stats count by status | where count > 50

Can someone advise on how to achieve this? I want the alert to trigger and output a table of pages and their counts whenever the total count is > 50.
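One common SPL shape for this is stats count by requested_content followed by eventstats sum(count) as total and where total > 50, which keeps the per-page rows while filtering on the overall sum. The underlying logic, sketched in Python with illustrative data matching the table above:

```python
from collections import Counter

# illustrative events matching the sample table: (requested_content, status)
events = [("/my-app/1.html", 500)] * 20 + [("/my-app/2.html", 500)] * 40

counts = Counter(page for page, status in events)
total = sum(counts.values())

# trigger only when the combined error count exceeds the threshold,
# but report the per-page breakdown rather than just the total
if total > 50:
    for page, count in sorted(counts.items()):
        print(page, count)
    print("Total", total)
```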
Hi there Splunkers, Maybe the title is a little weird, but here is the point: we have an entity that travels between two locations, and we can only register the time when it leaves our location. The problem is that I want to calculate the amount of time the entity was outside delivering items. The data we manage looks like this:

index=main vehicle_patent=LXHH63 Date=1/7/2021 | sort + id | table id, vehicle_patent, Date, Time

ID    vehicle patent    Date        Time
63    LXHH63            1/7/2021    5:24:12
73    LXHH63            1/7/2021    12:07:05
76    LXHH63            1/7/2021    14:43:57
79    LXHH63            1/7/2021    16:49:49
85    LXHH63            1/7/2021    18:56:31
86    LXHH63            1/7/2021    22:07:51

As you can see, the ID value changes each time the entity leaves the place, and this is filtered to show only one entity on a specific day. My output table should look something like this:

ID    vehicle patent    Date        Time       Time_Before*    Time_After*    Diff
63    LXHH63            1/7/2021    5:24:12    5:24:12         5:24:12        0:00:00
73    LXHH63            1/7/2021    12:07:05   5:24:12         12:07:05       6:42:53
76    LXHH63            1/7/2021    14:43:57   12:07:05        14:43:57       2:36:52
79    LXHH63            1/7/2021    16:49:49   14:43:57        16:49:49       2:05:52
85    LXHH63            1/7/2021    18:56:31   16:49:49        18:56:31       2:06:42
86    LXHH63            1/7/2021    22:07:51   18:56:31        22:07:51       3:11:20

*: optional to show

Is this even possible, or am I asking a bit too much? I've been trying for a whole week with no result so far. Thanks beforehand.
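In SPL, "difference from the previous event" is usually handled with streamstats (or autoregress) to carry the prior Time value onto each row before subtracting. The arithmetic itself, sketched in Python against the sample times above:

```python
from datetime import datetime

# departure times for one vehicle on one day, from the sample table
times = ["5:24:12", "12:07:05", "14:43:57", "16:49:49", "18:56:31", "22:07:51"]
parsed = [datetime.strptime(t, "%H:%M:%S") for t in times]

# pair each row with the previous one; the first row pairs with itself,
# which yields the 0:00:00 shown in the desired output table
diffs = [curr - prev for prev, curr in zip([parsed[0]] + parsed, parsed)]

for t, d in zip(times, diffs):
    print(t, d)
```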
Hello, We are trying to install the AppD K8s monitoring agent. I followed the ReadMe in the Git repo https://github.com/Appdynamics/appdynamics-operator to install the AppD Operator and agent in my AWS EKS. Evidence:

$ kubectl -n appdynamics get pods
NAME                                    READY   STATUS    RESTARTS   AGE
appdynamics-operator-5d7c495bd4-xjqkc   1/1     Running   0          72m
k8s-cluster-agent-6b46b878f-jnmw9       1/1     Running   0          55m

But in the Controller -> Server -> Cluster view, I can only see my EKS cluster and its version. When I click the "Detail" button, nothing is shown.

The k8s-cluster-agent log keeps generating the warning below:

[INFO]: 2021-03-05 13:49:55 - agentregistrationmodule.go:145 - Successfully registered agent again appd-k8s-agent
[WARNING]: 2021-03-05 13:49:55 - agentregistrationmodule.go:266 - Agent is not licensed
[INFO]: 2021-03-05 13:50:55 - agentregistrationmodule.go:122 - Registering agent again
[INFO]: 2021-03-05 13:50:55 - agentregistrationmodule.go:145 - Successfully registered agent again appd-k8s-agent
[WARNING]: 2021-03-05 13:50:55 - agentregistrationmodule.go:266 - Agent is not licensed

And in the Controller web page I get the following error:

Agent license request denied. Agent type: Server Visibility; Host: appd-k8s-agent; Reason: Not licensed for account

Any ideas will be appreciated.
I have the following query to display results in a pie chart. The problem is that I cannot see all the values in the pie chart.

index=dummy ticket_number="*" sourcetype="tickets"
| eval status = "incident_" + status
| stats first(opened_at) as ticket_openedAt latest(status) as ticketStatus by ticket_number
| where NOT ticketStatus IN("ticket_Resolved", "ticket_Canceled", "ticket_Closed")
| eval openTime = strptime(ticket_openedAt, "%Y-%m-%d %H:%M:%S"), currentTime=now(), days = round((currentTime - openTime)/86400, 0)
| where days > 5
| stats count as ticket_count by ticketStatus
| appendcols [ search index=dummy problem_number="*" sourcetype="problem"
    | eval status = "problem_" + status
    | stats first(opened_at) as problemOpenedAt latest(status) as problemStatus by problem_number
    | where NOT problemStatus IN("problem_Resolved", "problem_Closed")
    | eval openTime = strptime(problemOpenedAt, "%Y-%m-%d %H:%M:%S"), currentTime=now(), days = round((currentTime - openTime)/86400, 0)
    | where days > 5
    | stats count as problem_count by problemStatus ]
| appendcols [ search index=dummy issue_number="*" sourcetype="issue"
    | eval status = "issue_" + status
    | stats first(opened_at) as issueOpenedAt latest(status) as issueStatus by issue_number
    | where NOT issueStatus IN("issue_Resolved", "issue_Closed Complete")
    | eval openTime = strptime(issueOpenedAt, "%Y-%m-%d %H:%M:%S"), currentTime=now(), days = round((currentTime - openTime)/86400, 0)
    | where days > 5
    | stats count as issue_count by issueStatus ]
| transpose

I would appreciate your help displaying ticket_count by ticketStatus, problem_count by problemStatus, and issue_count by issueStatus in the pie chart. Also, is there a way to optimize this search?
Hi everyone, I have a search query as below:

index=abc ns=hjk (nodeUrl="*") Trace_Id=* "*"
| stats count by Trace_Id Span_Id ns app_name Log_Time caller nodeUrl nodeHttpStatus nodeResponseTime
| rename caller as "Caller"
| rename nodeUrl as "Node"
| rename nodeHttpStatus as "NodeHttpStatus"
| rename nodeResponseTime as "NodeResponseTime"
| fields - count
| replace "https://tyu/datagraphaccountnode/graphql" with "Account"
| replace "https:/fgh/datagraphassetnode/graphql" with "Asset"
| where NodeResponseTime > 5000

I want to trigger this hourly. How can I do this?
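Hourly scheduling is normally done in the UI via Save As > Alert (or Report) with a cron schedule of 0 * * * *. The equivalent savedsearches.conf fragment would look roughly like the sketch below (the stanza name is illustrative and the search is elided):

```
[node_response_time_over_5s]
enableSched = 1
cron_schedule = 0 * * * *
dispatch.earliest_time = -1h@h
dispatch.latest_time = @h
search = <the query above>
```

The dispatch window of the previous whole hour ensures each run sees each event exactly once.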
Guys, I have the following .csv file that needs to be captured by a Universal Forwarder, but the data is coming in messy. Could you help me create a sourcetype so that the events are indexed with the field order of the first line?

"Data","Site","Tipo","Agencia","Posicao","RCAF","Nome","Status","mes_ano"
"04/03/2021","SP","Agência","1010","AS","TESTE","Claudio A.","OnHook (01:00:00)","03-2021"
"04/03/2021","","Agência","","Consultor","32323232","Claudio A.","OnHook (10:00:41)","03-2021"
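Since the file has a quoted header line, a structured-data sourcetype is the usual approach; note that with a Universal Forwarder, INDEXED_EXTRACTIONS must live in props.conf on the forwarder itself. A sketch of such a stanza (the sourcetype name is illustrative, and the timestamp settings assume the day-first dates in the Data column):

```
[csv_agencias]
INDEXED_EXTRACTIONS = csv
FIELD_DELIMITER = ,
FIELD_QUOTE = "
HEADER_FIELD_LINE_NUMBER = 1
TIMESTAMP_FIELDS = Data
TIME_FORMAT = %d/%m/%Y
SHOULD_LINEMERGE = false
```

The monitor stanza in inputs.conf would then reference this sourcetype so each row becomes one event with the header names as fields.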