All Topics

We can connect successfully with the Salesforce TA and pull data from common objects, like Audit Trail. However, the EventLogFile input can't collect anything and throws these errors:

2021-08-09 20:23:01,824 +0000 log_level=ERROR, pid=24066, tid=MainThread, file=engine_v2.py, func_name=start, code_line_no=57 | [stanza_name=QA_EVENTLOG] CloudConnectEngine encountered exception
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/Splunk_TA_salesforce/lib/cloudconnectlib/core/engine_v2.py", line 52, in start
    for temp in result:
  File "/opt/splunk/etc/apps/Splunk_TA_salesforce/lib/cloudconnectlib/core/job.py", line 88, in run
    contexts = list(self._running_task.perform(self._context) or ())
  File "/opt/splunk/etc/apps/Splunk_TA_salesforce/lib/cloudconnectlib/core/task.py", line 289, in perform
    raise CCESplitError
cloudconnectlib.core.exceptions.CCESplitError

2021-08-09 20:23:01,822 +0000 log_level=ERROR, pid=24066, tid=MainThread, file=task.py, func_name=_send_request, code_line_no=505 | [stanza_name=QA_EVENTLOG] The response status=400 for request which url=MYCOMPANY--qa.my.salesforce.com/services/data/v51.0/query?q=SELECT%20Id%2CEventType%2CLogDate%20FROM%20EventLogFile%20WHERE%20LogDate%3E%3D2020-01-01T00%3A00%3A00.000z%20AND%20Interval%3D%27Daily%27%20ORDER%20BY%20LogDate%20LIMIT%201000 and method=GET and message=[{"message":"\nLogDate>=2020-01-01T00:00:00.000z AND Interval='Daily' ORDER BY LogDate\n ^\nERROR at Row:1:Column:91\nNo such column 'Interval' on entity 'EventLogFile'. If you are attempting to use a custom field, be sure to append the '__c' after the custom field name. Please reference your WSDL or the describe call for the appropriate names.","errorCode":"INVALID_FIELD"}]

We just checked, and all permissions are correct (View Event Log Files / View All Data / API Enabled).
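For anyone hitting the same INVALID_FIELD error: the Interval column on EventLogFile typically only exists in orgs where hourly event log files (the full Event Monitoring add-on) are available, so one way to narrow this down is to run the TA's query by hand without the Interval filter. A minimal sketch - the instance URL is taken from the error above, and $ACCESS_TOKEN is a placeholder for a valid OAuth token:

curl -G "https://MYCOMPANY--qa.my.salesforce.com/services/data/v51.0/query" \
  -H "Authorization: Bearer $ACCESS_TOKEN" \
  --data-urlencode "q=SELECT Id,EventType,LogDate FROM EventLogFile WHERE LogDate>=2020-01-01T00:00:00.000Z ORDER BY LogDate LIMIT 10"

If this succeeds while the Interval='Daily' version fails, the org most likely lacks the hourly event log file feature, and the input would need to run without the Interval predicate.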
Hello all, I was wondering if I could get some suggestions on why Tomcat isn't honoring my pattern values. I am following the instructions here: https://docs.splunk.com/Documentation/AddOns/released/Tomcat/Recommendedfields

As recommended by the Splunk documentation, we set up the following AccessLogValve in server.xml:

className="org.apache.catalina.valves.AccessLogValve" prefix="localhost_access_log_splunk" suffix=".txt" pattern="%t, x_forwarded_for=?%{X-Forwarded-For}i?, remote_ip=?%a?,...

The filename and fields log as expected. The only issue is that instead of quotation marks ("), I am just seeing question marks (i.e. ...x_forwarded_for=?-?, remote_ip=?1.2.3.1?, remote_host=?1.2.3.2?,...).

Splunk Add-on for Tomcat: https://splunkbase.splunk.com/app/2911/
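One possible explanation: the pattern attribute is itself delimited by double quotes in server.xml, so embedded quotes must be written as the XML entity &quot; - if literal ? characters ended up in the pattern (for example via a paste from a rendered page), the valve prints them as-is. A sketch of the valve definition under that assumption (directory and the trailing fields are abbreviated):

<Valve className="org.apache.catalina.valves.AccessLogValve"
       directory="logs"
       prefix="localhost_access_log_splunk" suffix=".txt"
       pattern="%t, x_forwarded_for=&quot;%{X-Forwarded-For}i&quot;, remote_ip=&quot;%a&quot;, remote_host=&quot;%h&quot;" />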
I have network logs, showing communication between various network devices, in one index in Splunk. I have another index with information about the devices that I need to report on. But I'm having issues because the network logs summarize the network activity, listing together all the devices with the same activity, as seen below:

How can I get the individual information about the devices, and/or how can I enumerate the information above? If I send the results to a table, the device_ids column comes out blank, even if there is only one device in the list.
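One common way to break a summarized multivalue field back into one row per device is mvexpand, then enrich each row from the device index. A sketch with assumed names - index=network, a comma-delimited device_ids field, and a device_info index keyed on device_id:

index=network
| makemv delim="," device_ids
| mvexpand device_ids
| rename device_ids AS device_id
| join type=left device_id [ search index=device_info | fields device_id, owner, location ]
| table _time, device_id, owner, location

If device_ids is already multivalue in the events, the makemv step can be dropped.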
Hi there, I have a CSV lookup file consisting of sender email addresses. I'd like to search the Splunk logs for all the entries with these SenderAddresses over the last 90 days to determine what FromIP they have. What search syntax do I use? The file has been uploaded to Splunk and is called AllSenders.csv. It has the headings email and flag; all the flags are set to 1 since I want to search them all.

In general, to search the logs for email I use:

index=app_messagetrace sourcetype=ms:o365:reporting:messagetrace

Thanks in advance. Let me know what other info you need to help.
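A subsearch over the lookup can generate the OR'd list of addresses automatically. A sketch, assuming the messagetrace events carry a SenderAddress field that matches the email column in AllSenders.csv:

index=app_messagetrace sourcetype=ms:o365:reporting:messagetrace earliest=-90d
    [ | inputlookup AllSenders.csv | rename email AS SenderAddress | fields SenderAddress ]
| stats values(FromIP) AS FromIP by SenderAddress

The subsearch expands to (SenderAddress="a@example.com" OR SenderAddress="b@example.com" OR ...), so only events for those senders are retrieved.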
Our network uses a PKI (client and server certificate) authentication system. The Splunk administrators are not allowed to open the management port (8089) to allow API queries, so I have been trying to use the web port to mimic the browser interaction and create a search job. I use the Developer Tools in the browser to watch the API calls to get the session ID and cookies/tokens, and I pass them to the correct endpoints just like the browser does. (I'm using the requests Python library.)

No matter what I do, each time I GET or POST I am redirected to a 'login?session_expired=1' endpoint, which then redirects me to the original endpoint I intended to reach. This works fine for a GET, since I reach the resource I was trying to get to. With a POST (creating the search job), the redirection changes the POST to a GET - so I can only retrieve the status of existing jobs instead of creating a new job.

For additional context, the paths are built like this (scoped to a namespace):

When I'm trying to GET a job slot:
httpx://the.splunk.domain/en-US/splunkd/__raw/servicesNS/myusername/search/saved/searches/_new

When I'm trying to POST a new job:
httpx://the.splunk.domain/en-US/splunkd/__raw/servicesNS/myusername/search/search/jobs

I pass all of the headers I see in the browser request, including the X-Splunk-Form-Key and Cookie fields. If I don't include those fields the connection is rejected, so I know it is checking them for validity. When I include the header {'X-Requested-With': 'XMLHttpRequest'} I get a 401 Denied error every time (even though the browser is passing it).

I tried repeatedly to use the code block feature but it failed every time; the code block is listed in my own reply below.

The request follows the redirection and ends up making a GET to the jobs endpoint, which returns existing jobs instead of POSTing the new job. I don't know what else it wants in order to accept the POST data without first redirecting me through the login endpoint. I need to understand what a Splunk instance in our environment needs to accept an authenticated connection without redirecting me to a login endpoint first. If anyone has experience with this server configuration and can help, I would really appreciate it. Thank you!
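When debugging this kind of flow, it can help to disable redirect-following so the 302 that rewrites the POST into a GET becomes visible, and to confirm the client certificate is actually presented on the POST itself. A rough sketch with the requests library - all paths, cookie names, and the form key are placeholders, and the cookie name in particular varies by Splunk version and port:

import requests

s = requests.Session()
s.cert = ("/path/to/client.crt", "/path/to/client.key")   # PKI client certificate (placeholder paths)
s.verify = "/path/to/ca-bundle.pem"                       # CA bundle for the server certificate

form_key = "PASTE_X_SPLUNK_FORM_KEY_HERE"                 # captured from the browser session
cookies = {"splunkweb_csrf_token_8000": form_key}         # assumption: actual cookie names differ per deployment

url = "https://the.splunk.domain/en-US/splunkd/__raw/servicesNS/myusername/search/search/jobs"
headers = {"X-Splunk-Form-Key": form_key, "X-Requested-With": "XMLHttpRequest"}

# allow_redirects=False surfaces the redirect instead of silently turning the POST into a GET
r = s.post(url, data={"search": "search index=_internal | head 1"},
           headers=headers, cookies=cookies, allow_redirects=False)
print(r.status_code, r.headers.get("Location"), r.text[:200])

If the response is a 3xx pointing at the login endpoint, splunkweb is not accepting the session cookies as valid for that request, which narrows the problem to cookie scope/expiry rather than the POST body.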
I've been having a hard time trying to get a Splunk search that gives me a count of all records in my Lead object in Salesforce where OwnerId = the Id of the queue I'm using to manage intake and created date = today, but every time I search our index I get way more records than I should (last check was 513 in Splunk versus 413 in SFDC Production). My query is below. To explain its current state and what I've been trying: I've created a CleanNow that's just today's date in Year-Month-Day format and a CleanCreatedDate converting the Salesforce CreatedDate to the same format, and my last attempt to limit the search scope was a subsearch to find Ids for records whose owner is not my queue and drop them. The added date columns on the table are just to sanity-check and try to find why I'm getting a delta.

index=sfdc sourcetype=sfdc:lead OwnerId="[Id of my queue]"
| eval CleanNow=strftime(now(), "%Y-%m-%d")
| eval CleanCreatedDate=strftime(strptime(CreatedDate,"%Y-%m-%d"),"%Y-%m-%d")
| where CleanCreatedDate=CleanNow
| search NOT [search sourcetype=sfdc:lead OwnerId!="[Id of my queue]" | fields Id]
| table Id, Status, OwnerId, CleanNow, CleanCreatedDate, CreatedDate, LastModifiedDate

Looking at what the Splunk query gives me and searching Prod with those Ids, I can see Splunk is returning records as New / owned by my queue that in Prod are actually Converted / assigned to a human - and not even ones that were just now converted; I mean ones that were converted hours ago. I took one of those Ids that Splunk returned as New / owned by the queue but that Prod said was converted hours ago, and ran:

index=sfdc sourcetype=sfdc:lead Id="[Id of the delta record]"

I get three events:
1. Status is New and it's owned by my queue (this is what I actually want to see in the return)
2. Status is New and it's owned by a human being (do not want to see)
3. Status is Converted and it's owned by a human being (do not want to see)

That Id only appears once in the results from Splunk, and the fact that there were two other entries that don't match what I consider a "hit" means I should not have seen it at all, including the one time it logged as New / owned by the queue. Not entirely sure I'm explaining it right, but basically I need a way to re-check the results for any Ids that match the original query but also appear a second (or third, or fourth...) time with a different owner, and drop them from the return.
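Since sfdc:lead indexes a new event every time a record changes, filtering event-by-event will match stale snapshots. One way around this is to collapse to the most recent event per Id first, then apply the owner/date filters to that latest state - which also removes the need for the subsearch. A sketch:

index=sfdc sourcetype=sfdc:lead
| stats latest(Status) AS Status, latest(OwnerId) AS OwnerId, latest(CreatedDate) AS CreatedDate, latest(LastModifiedDate) AS LastModifiedDate by Id
| eval CleanNow=strftime(now(), "%Y-%m-%d")
| eval CleanCreatedDate=strftime(strptime(CreatedDate, "%Y-%m-%d"), "%Y-%m-%d")
| where OwnerId="[Id of my queue]" AND CleanCreatedDate=CleanNow
| table Id, Status, OwnerId, CleanNow, CleanCreatedDate, CreatedDate, LastModifiedDate

Note this still lags Production by however often the TA polls; a record converted after the last poll will look stale until the next run.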
Hello, I have a query that gives me the data below:

_time               | id      | order_id | job       | user_id
2021-06-08 17:00:00 | 2240905 | -        | done      | 23
2021-06-08 17:00:00 | 2240844 | -        | done      | 23
2021-06-08 12:00:00 | 2240905 | -        | start     | 167
2021-06-15 10:00:00 | 2240844 | -        | start     | 102
2021-06-15 10:00:00 | 2240905 | 1066899  | allocated | 23
2021-06-15 09:00:00 | 2240844 | 1055788  | allocated | 23

For each id, I need to find the "start" job to get its user_id and _time, but I also need the order_id. How can I do this? I need something like this:

_time               | id      | order_id | job   | user_id
2021-06-08 12:00:00 | 2240905 | 1066899  | start | 167
2021-06-15 10:00:00 | 2240844 | 1055788  | start | 102

Thanks
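Since the allocated event carries the order_id for the same id, eventstats can copy it onto every row for that id before filtering down to the start rows. A sketch appended to the existing query (the first eval treats "-" as missing so values() only picks up the real order_id):

<your existing search>
| eval order_id=if(order_id="-", null(), order_id)
| eventstats values(order_id) AS order_id by id
| where job="start"
| table _time, id, order_id, job, user_id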
Hi all, I have a lookup and I'd like to filter it based on a tokenized value. The lookup dropdown also sets a different token based on the selection. This would normally be a simple task, but I've been asked to have the lookup pre-filtered based on who is using the app. Each item in the dropdown represents a different user.

The lookup:
| inputlookup $tokLookup$ | fields field_description, field | dedup field, field_description

Field for label = field_description
Field for value = field

The pseudo-code of what I'd like to do is simple:
| inputlookup $tokLookup$ | where field="$tokUserRole$" | fields field_description, field | dedup field, field_description

Is this possible within the constraints, such that I'm only producing the single value from the lookup corresponding to the user?
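The pseudo-code is very close to working SPL: inputlookup accepts a where clause directly, so the filter can live inside the populating search. A sketch, assuming $tokUserRole$ is set before this search runs (one common way to seed it is the built-in $env:user$ token, mapped to a role via a hidden search or a second lookup):

| inputlookup $tokLookup$ where field="$tokUserRole$"
| fields field_description, field
| dedup field, field_description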
Hi, I have events in the format below:

08/09/2021 09:27:00 +0000, search_name=sre_slo_BE_module_priority_monthly, search_now=1628501220.000, info_max_time=1628501220.000, info_search_time=1628501221.635, Module=InvoiceManagement, Priority=P4, LastViolationMonth="Jul-2021", MissedCount=1, LastViolationp90ResponseTime (s)="30.44", Deviation (%)="1.5"

I would like to plot a graph whose Y axis has the values P1, P2, P3, P4 and whose X axis is the month. I tried the query below, assigning numeric values to the priorities, but then the Y axis shows the values assigned to each Priority rather than the Priority itself.

index=summary source=sre_slo_BE_module_priority_monthly Module="ControlCenter"
| eval convert_epoch = strftime(_time,"%m-%d-%Y")
| eval prevMonth=lower(strftime(relative_time(_time,"-1mon@d"),"%B"))
| eval MonthYear = prevMonth + "-" + date_year
| search convert_epoch!="08-01-2021"
| eval PriorityValue = case(Priority="P1", 4, Priority="P2", 3, Priority="P3", 2, Priority="P4", 1)
| stats values(PriorityValue) as PriorityValue by MonthYear, Priority

Below is the graph I get. Instead of PriorityValue on the Y axis, I need Priority itself. Could someone please help me out here? @kamlesh_vaghela @diogofgm, is this something you can assist with? I'd appreciate any help. Thanks
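One option that keeps the priorities themselves visible is to pivot Priority into chart series, so each of P1-P4 becomes its own colored series over MonthYear rather than a numeric Y value. A sketch (the built-in chart types cannot relabel numeric Y-axis ticks as P1-P4, so pivoting is usually the pragmatic route):

index=summary source=sre_slo_BE_module_priority_monthly Module="ControlCenter"
| eval prevMonth=lower(strftime(relative_time(_time,"-1mon@d"),"%B"))
| eval MonthYear = prevMonth + "-" + date_year
| chart count over MonthYear by Priority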
Hi all, I'm not sure if Deployment Architecture was the right place to put this question. I need some clarification regarding search restrictions.

Context: The powers that be are looking into roles to enforce data segregation on a single application that serves multiple clients. This is not my area of expertise.

Question: I'm having trouble figuring out the syntax of 'Search Restrictions' in the roles section of Splunk. For each role, we limit the indexes available to that client through Settings -> Users & Authentication -> Roles -> Indexes. For now, 'included' is checked for each index used. There are 6 capabilities inherited from a base role, which I can list if they are relevant to the question. Through testing, I've narrowed the problem down to the restrictions. Confining the search to the indexes works, and we get all the data we need (though it's not separated by client).

Under 'Search Restrictions', I've tried several combinations of syntax to evaluate a field, present in the indexes, that dictates which client's data we're looking at. Call it CLIENT; it's a two-character alphanumeric value. The format in live data is clientArray.CLIENT=6A; in summary data it's CLIENT=6A.

Looking through the docs, it should work correctly from what I can tell. I add in my field for both forms, using the search filter SPL generator: (CLIENT::6A) OR (clientArray.CLIENT::6A)

The preview gives me:
index=index1 OR index=index2 OR index=index3 OR index=index4 OR index=index5 | search (CLIENT::6A) OR (clientArray.CLIENT::6A)

This does not allow any data through. If I use CLIENT=6A in a basic search, I get back the data I need. Of course '=' is not allowed in the Search Restrictions UI. Any ideas on what I'm doing wrong here?
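One thing worth checking: the field::value form matches indexed fields, and both CLIENT and clientArray.CLIENT look like search-time extractions - which would explain why ::6A drops everything while =6A works interactively. The srchFilter setting in authorize.conf does accept = comparisons even where the UI generator does not, so a sketch in conf form (role name and index list are placeholders; worth verifying on a test role):

[role_client_6a]
srchIndexesAllowed = index1;index2;index3;index4;index5
srchFilter = (CLIENT=6A) OR (clientArray.CLIENT=6A)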
I am receiving a "splunkd is experiencing a problem" message in ES. It says it might automatically improve or worsen. Thank you.
Hello, I have the following SPL command:

| tstats count where index=main host IN (H1,H2) by host, _time span=1h
| predict "count" as prediction algorithm=LLP holdback=168 future_timespan=240 period=168 upper90=upper90 lower90=lower90
| `forecastviz(240, 168, "count", 90)`

It makes predictions about the event count per host and outputs a table like this:

host | _time            | count   | lower90(prediction) | prediction | upper90(prediction)
H1   | 2021-07-10 00:00 | 6170671 | 2494994.26372       | 6170671.0  | 9846347.73628
H1   | 2021-07-10 01:00 | 6231397 | 2456899.6988        | 6231397.0  | 10005894.3012
...
H2   | 2021-07-10 05:00 | 5216984 | 1722288.55477       | 5216984.0  | 8711679.44523
H2   | 2021-07-10 06:00 | 5297360 | 1979214.14187       | 5297360.0  | 8615505.85813
...

I would like to calculate linear regression statistics for each host in this table with the MLTK macro

`regressionstatistics("count", prediction)`

to output a table like this:

host | rSquared | RMSE
H1   | 0.8042   | 1195199.83
H2   | 0.7842   | 1126684.87

I can't get it to work per host. Could you help me? Thanks in advance. Sincerely, M.
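If the macro won't split by host, the same statistics can be computed by hand per host, since R² and RMSE only need the residuals. A sketch appended to the predict output - it assumes count and prediction are both populated on the historical rows, and filters out the future rows where count is empty:

<tstats ... | predict ... as above>
| where isnotnull(count) AND isnotnull(prediction)
| eventstats avg(count) AS mean_count by host
| eval sq_err=pow(count - prediction, 2), sq_tot=pow(count - mean_count, 2)
| stats sum(sq_err) AS SSE, sum(sq_tot) AS SST, count AS n by host
| eval rSquared=round(1 - SSE/SST, 4), RMSE=round(sqrt(SSE/n), 2)
| table host, rSquared, RMSE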
I have two different datacenters. hostA and hostB identify the datacenters, and 1, 2, 3, ... are the hosts within them: hostA-1, hostA-2, hostA-3, hostA-4, hostA-5, and hostB-5, hostB-6, hostB-7, hostB-8. I want to compare the two datacenters side by side and only get the token values that match. Here is a sample log:

2021-08-05 19:01:59.677 INFO RestTemplate: {"logType":"STANDARD","message":"==========================request log================================================", "Method":"POST","Headers":"{Accept=[application/json], Content-Type=[application/json], Authorization=[Bearer eyJhQM8DMG8bEtCIsiZ0GjyYWxwt3ny1Q], Token=[basd23123], "Request body": {"accountNumber":824534875389475}}}
hostA = 1 source = a.log sourcetype = a_log

2021-08-05 19:01:59.687 INFO RestTemplate: {"logType":"STANDARD","message":"==========================request log================================================", "Method":"POST","Headers":"{Accept=[application/json], Content-Type=[application/json], Authorization=[Bearer eyJhQM8DMG8bEtCIsiZ0GjyYWxwt3ny1Q], Token=[basd23123], "Request body": {"accountNumber":824534875389475}}}
hostb = 6 source = a.log sourcetype = a_log

If the Authorization value matches on both hostA and hostB, then only the matched values are needed, e.g.:

hostA   | hostB   | result
asd132c | asd132c | matched
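A sketch of one way to compare: extract the bearer token with rex, derive the datacenter from the host name, and keep only tokens seen in both. This assumes the host field values begin with hostA-/hostB-; if the datacenter actually lives in a separate field, substitute that field for the match():

source=a.log sourcetype=a_log
| rex "Authorization=\[Bearer (?<auth_token>[^\]]+)\]"
| eval datacenter=if(match(host, "^hostA"), "hostA", "hostB")
| stats dc(datacenter) AS dc_count, values(datacenter) AS datacenters by auth_token
| where dc_count=2
| eval result="matched"
| table auth_token, datacenters, result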
Hi all :) I am trying to use the Splunk REST API via Postman. When I make a request on port 8089 I get "COULD NOT GET ANY RESPONSE". The URL is the right one and the host is listening on the port. I can't get a response using curl or Python either. The web UI is working properly (port 8000), and so is sending data using HEC (port 8089).
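A quick way to isolate whether this is Postman or the port: hit the management endpoint with curl directly from the machine running Postman. Since 8089 usually serves a self-signed certificate, -k is needed; credentials below are placeholders:

curl -k -u admin:yourpassword https://your-splunk-host:8089/services/server/info

If this also hangs, the usual suspects are a firewall/security group between the client and the host, or splunkd bound to a different interface - both outside Postman's control. In Postman itself, disabling "SSL certificate verification" under Settings is the equivalent of curl's -k.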
Hello team, I want to forward OpenTelemetry Collector logs to Splunk. I'm not referring to sending application logs to Splunk using the Splunk HEC exporter. When you have the logging exporter configured on your collector, your terminal shows collector logs such as

2021-08-09T12:55:28.110Z info healthcheck/handler.go:128 Health Check state change {"component_kind": "extension", "component_type": "health_check", "component_name": "health_check", "status": "ready"}
2021-08-09T12:55:28.110Z info service/service.go:267 Everything is ready. Begin running and processing data.

when the collector is ready, or

2021-08-09T12:55:33.511Z info exporterhelper/queued_retry.go:276 Exporting failed. Will retry the request after interval. {"component_kind": "exporter", "component_type": "otlp", "component_name": "otlp", "error": "failed to push log data via OTLP exporter: rpc error: code = DeadlineExceeded desc = context deadline exceeded", "interval": "4.121551979s"}

when the collector fails to export data. I want to send these collector logs to Splunk. Is there a way to configure the Splunk HEC exporter to capture these collector logs and send them to Splunk?
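The collector doesn't route its own internal logs through its pipelines, so one workaround is to redirect its stderr to a file and read that back in with the filelog receiver (available in the contrib distribution), exporting over HEC. A sketch - the log path, endpoint, and token are all placeholders:

receivers:
  filelog:
    include: [/var/log/otelcol.log]   # assumes the collector's stderr is redirected to this file

exporters:
  splunk_hec:
    token: "00000000-0000-0000-0000-000000000000"
    endpoint: "https://splunk.example.com:8088/services/collector"
    source: "otelcol"
    sourcetype: "otel:collector"

service:
  pipelines:
    logs/collector:
      receivers: [filelog]
      exporters: [splunk_hec]

In containerized setups, a log shipper reading the container's stdout/stderr (or a sidecar collector with filelog pointed at the container log path) achieves the same thing without the redirect.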
We have 3 clustered indexers and an original search head. We installed an app that has a custom props.conf on the search head, and it is NOT showing the properly extracted fields when performing searches. We deployed a new search head and installed the exact same app; the new search head shows the proper fields. The two servers appear to be identical, and running

splunk cmd btool props list --debug

shows the exact same results, line by line, for the app. The original server does have some extra apps, but given the btool results above, there would not appear to be any conflicts with other apps. What would be the next steps in troubleshooting why the original search head does not show the proper fields?
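Since btool only reads the filesystem, one next step is to compare the configuration splunkd actually serves on each search head - which also reflects app permissions, sharing levels, and user-level overrides that btool output alone won't surface. A sketch via the REST endpoint, run on each search head with the sourcetype substituted:

| rest /servicesNS/-/-/configs/conf-props splunk_server=local
| search title="your:sourcetype"
| table title, eai:acl.app, eai:acl.sharing, EXTRACT-*, REPORT-*

If the stanza shows app-level sharing on the broken search head but global on the working one, the extractions are simply not visible outside the app context - a common cause when btool looks identical.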
How can I extract this?

"properties": {"nextLink": null, "columns": [
    {"name": "Cost", "type": "Number"},
    {"name": "Date", "type": "Number"},
    {"name": "Charge", "type": "String"},
    {"name": "Publisher", "type": "String"},
    {"name": "Resource", "type": "String"},
    {"name": "Resource", "type": "String"},
    {"name": "Service", "type": "String"},
    {"name": "Standard", "type": "String"},
"rows": [
    [2.06, 20210807, "usage", "uuuu", "hhh", "gd", "bandwidth", "azy", "HHH"],
    [2.206, 20210807, "usage", "uuuhhh", "ggg", "gd", "bandwidth", "new", "YYY"]
]

The number of columns can increase.
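With spath the column names and row arrays can be pulled out as multivalue fields, and mvindex can map cells to names when the leading columns are stable. A sketch (fully dynamic column counts would need foreach over the index range or an external script; field names beyond the JSON shown are assumptions):

<base search>
| spath path=properties.columns{}.name output=colnames
| spath path=properties.rows{} output=row
| mvexpand row
| eval row=replace(row, "[\[\]\"]", "")
| eval cells=split(row, ",")
| eval Cost=mvindex(cells, 0), Date=mvindex(cells, 1), Charge=mvindex(cells, 2)
| table Cost, Date, Charge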
Hello, after upgrading to Splunk 8 from Splunk 6, it seems that the "show_source" view (used in "Event actions" -> "Show source") isn't wrapping long lines as it used to. We've isolated the new setting in the JSX code in /opt/splunk/share/splunk/search_mrsparkle/exposed/js/views/show_source/index.jsx:

tableRow: {
    border: 0,
    margin: 0,
    padding: 0,
    whiteSpace: 'nowrap',
},

Previously the setting was 'white-space: pre-wrap', as we can see in our backups of search_mrsparkle from Splunk 6. Is there any way for us to get the same behavior as before, so that long lines in "Show source" are wrapped?
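For reference, restoring the Splunk 6 behavior would mean changing that style back, but note that Splunk Web may serve a compiled copy of this view, in which case editing the .jsx alone won't take effect - and any change under search_mrsparkle is unsupported and liable to be overwritten on upgrade:

tableRow: {
    border: 0,
    margin: 0,
    padding: 0,
    whiteSpace: 'pre-wrap',  // Splunk 6 value; assumes the served bundle picks this up or is rebuilt
},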
Hi guys, I'm currently building my own lab in Docker, where each instance is mapped to a different host port using -P with docker run. Whatever I set as targetUri within my deployment client config, the DC never phones home.

Variations of my deploymentclient.conf:

Deployment server container name only:

[deployment-client]
phoneHomeIntervalInSecs = 60
disabled = false

[target-broker:deploymentServer]
targetUri = DeploymentServer

Deployment server container name with management port:

[deployment-client]
phoneHomeIntervalInSecs = 60
disabled = false

[target-broker:deploymentServer]
targetUri = DeploymentServer:8089

Deployment server container IP with management port:

[deployment-client]
phoneHomeIntervalInSecs = 60
disabled = false

[target-broker:deploymentServer]
targetUri = 172.19.0.2:8089

IP grabbed with:

docker inspect -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' DeploymentServer

Other info:
- I have an app in deployment-apps on my DS.
- Internal logs below are from my UF1, when initiating with IP and port.
- I can ping the IPs from each machine.
- I turned off the Windows firewall to ensure traffic wasn't being blocked; nothing changed.
- Even when setting the DS via -e on the UF container, it does not work.

Internal logs from uf01:
- Phonehome thread started
- channel=tenantService/handshake Will retry sending handshake message to DS; err=not_connected
- channel=tenantService/handshake Will retry sending handshake message to DS; err=not_connected
- Handshake done

docker ps output:

IDX1:
4c5998526d1b splunk/splunk:latest "/sbin/entrypoint.sh…" 3 hours ago Up About an hour (healthy) 0.0.0.0:49169->8000/tcp, :::49169->8000/tcp, 0.0.0.0:49168->8065/tcp, :::49168->8065/tcp, 0.0.0.0:49167->8088/tcp, :::49167->8088/tcp, 0.0.0.0:49166->8089/tcp, :::49166->8089/tcp, 0.0.0.0:49165->8191/tcp, :::49165->8191/tcp, 0.0.0.0:49164->9887/tcp, :::49164->9887/tcp, 0.0.0.0:49163->9997/tcp, :::49163->9997/tcp idx1

DS:
843454a553ec splunk/splunk:latest "/sbin/entrypoint.sh…" 3 hours ago Up About an hour (healthy) 0.0.0.0:49159->8000/tcp, :::49159->8000/tcp, 0.0.0.0:49158->8065/tcp, :::49158->8065/tcp, 0.0.0.0:49157->8088/tcp, :::49157->8088/tcp, 0.0.0.0:49156->8089/tcp, :::49156->8089/tcp, 0.0.0.0:49155->8191/tcp, :::49155->8191/tcp, 0.0.0.0:49154->9887/tcp, :::49154->9887/tcp, 0.0.0.0:49153->9997/tcp, :::49153->9997/tcp DeploymentServer

UF01:
923c0bc20fb3 splunk/universalforwarder:latest "/sbin/entrypoint.sh…" 3 hours ago Up About an hour (healthy) 0.0.0.0:49162->8088/tcp, :::49162->8088/tcp, 0.0.0.0:49161->8089/tcp, :::49161->8089/tcp, 0.0.0.0:49160->9997/tcp, :::49160->9997/tcp

It's definitely confusing. Is anyone able to help me out?
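For container-to-container traffic, the published host ports from -P don't matter: the container name with the internal management port is the right form, but name resolution only works on a user-defined Docker network (not the default bridge). A sketch of the client config plus a couple of checks, assuming the UF image's default install path:

# /opt/splunkforwarder/etc/system/local/deploymentclient.conf
[deployment-client]
phoneHomeIntervalInSecs = 60
disabled = false

[target-broker:deploymentServer]
targetUri = DeploymentServer:8089

Then restart and verify from inside the UF container:

/opt/splunkforwarder/bin/splunk restart
/opt/splunkforwarder/bin/splunk show deploy-poll

If show deploy-poll reports the expected URI but phone home still fails, checking docker network inspect to confirm both containers share the same user-defined network is a reasonable next step.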
From where can I download Splunk 6.6.2 (build 4b804538c686)? The oldest version I can see for download on the portal is 7.1.1. I need this for specific testing purposes.