All Topics

Hi all, I have the following problem: we configured SSO with Okta using SAML. When authenticating, we receive the following error message from Splunk: "Saml response does not contain group information".

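A minimal sketch of a possible fix, assuming the assertion is simply missing a group attribute: Splunk's SAML integration expects group membership in an attribute named "role" by default, so the Okta application would need a Group Attribute Statement with that name, and authentication.conf would map the returned group names to Splunk roles. The group names below are hypothetical.

# In the Okta application (not in Splunk), add a Group Attribute Statement:
#   Name: role   Name format: Unspecified   Filter: Matches regex .*
# Then map IdP group names to Splunk roles in authentication.conf
# ("Splunk_Admins" and "Splunk_Users" are hypothetical Okta group names):
[roleMap_SAML]
admin = Splunk_Admins
user = Splunk_Users
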
Here I have 3 fields: "Status", merchantID, and count. I am trying to find out the percentage of "CONFIRMED" versus "REJECTED" (these are values of "Status") for each merchantID. I mean the calculation would be ((REJECTED-CONFIRMED)/CONFIRMED)*100, but this should be at the merchantID level. I am kind of new to Splunk and stuck. I could only come up with the below:

index=apps sourcetype="pos-generic:prod" Received request to change status CONFIRMED OR REJECTED partner_account_name="Level Up"
| stats count by status, merchantId

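A minimal sketch of one way to finish this, building on the search above and assuming status takes exactly the values CONFIRMED and REJECTED: chart pivots the counts into one column per status value, so the percentage can be computed per merchantId.

index=apps sourcetype="pos-generic:prod" Received request to change status CONFIRMED OR REJECTED partner_account_name="Level Up"
| chart count over merchantId by status
| eval pct = round(((REJECTED - CONFIRMED) / CONFIRMED) * 100, 2)

Merchants with a CONFIRMED count of zero will get a null pct, since the division is undefined there.
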
I'm trying to understand the upgrade process for a Java agent from 4.4.3 to 4.5.15. I found this documentation, which makes it look pretty straightforward: back up the existing install, copy down the new binaries, and then copy the config files from the old install to the new one. Is that it? Do we need to worry about data corruption or anything on the controller side by just switching agent versions?

Our Splunk cluster has no Internet connection, by policy. Any idea how to at least semi-automate update checks for Splunkbase apps? thx afx

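A minimal sketch of one approach, assuming you are allowed to carry a file out of the air-gapped environment: export the installed app inventory with the rest command, then compare that list against Splunkbase from a connected machine.

| rest /services/apps/local splunk_server=local
| search disabled=0
| table title label version
| outputcsv installed_apps.csv
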
I have an existing search that finds fields named "RunDate", "StartTime", and "EndTime" stored as part of test run summaries. The search then converts those time values into usable Unix time via strptime:

index="IDX1" sourcetype="SRC" ProjectName="PRJ"
| eval stime = strptime(StartTime,"%m/%d/%Y %I:%M:%S %p")
| eval etime = strptime(EndTime,"%m/%d/%Y %I:%M:%S %p")
| table RunDate stime etime
| sort RunDate desc

Now for the tricky part... I would like a 4th column that uses the time frame in each row to perform a calculation on values coming from a different index/source:

index="IDX2" "HOST" "data.metricId" IN (1234)
| stats avg("data.metricValues{}.value") as average
| eval total=average/100

Somehow, this needs to be time-constrained by earliest=stime and latest=etime for each RunDate (the results should be a series). Is this possible? Can I run a secondary search/eval using calculated values from the primary search as the earliest and latest time constraints? I attempted to do this with a map search, but it seems that for a map search to work properly, there must be an overlapping field. In this case, the only thing that overlaps between the two searches is the time parameters.

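A minimal sketch of how map could still work here, assuming epoch values can be passed straight into earliest/latest: map substitutes $field$ tokens from each row of the outer search, so no overlapping field is actually required.

index="IDX1" sourcetype="SRC" ProjectName="PRJ"
| eval stime = strptime(StartTime,"%m/%d/%Y %I:%M:%S %p"), etime = strptime(EndTime,"%m/%d/%Y %I:%M:%S %p")
| table RunDate stime etime
| map maxsearches=100 search="search index=IDX2 \"HOST\" \"data.metricId\" IN (1234) earliest=$stime$ latest=$etime$ | stats avg(\"data.metricValues{}.value\") as average | eval total=average/100, RunDate=\"$RunDate$\""

Note that maxsearches caps how many rows map will run a subsearch for, so it needs to be at least the number of test runs in the time range.
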
I'm trying to upload a dSYM file from the UI https://mint.splunk.com/dashboard/project/XXX/settings/dsyms but getting an error: "Access to XMLHttpRequest at 'https://ios.splkmobile.com/api/v1/dsyms/upload' from origin 'https://mint.splunk.com' has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present on the requested resource." Also, the list of dSYMs shows only 4 dSYM files from 2017, but we upload a new one together with the build every month. I'm using Fastlane to upload the dSYM file; more details: https://github.com/fastlane/fastlane/blob/master/fastlane/lib/fastlane/actions/splunkmint.rb It looks like Fastlane uses the same API (https://ios.splkmobile.com/api/v1/dsyms/upload) as the Splunk MINT UI. From the logs, Fastlane uploaded the file successfully, but the file does not appear in that dSYM list.

Hi, please give me any feedback/ideas as to whether I am following the best course of action. I have a database table that is occasionally updated/added to. I would like to start using this information in searches as a lookup. What is the best approach here? I had thought of running a search and outputting the data to a KV store lookup. I have tried this, but since any record in the table could be updated, I am not clear on how to use the key_field/_key value to pick up the updates. I have also seen examples using a CSV lookup and using joins to merge the old/new data, then writing out a new file. Which method is best for picking up changes that may occur in any field from the database? The records do have a fixed identity field, which may help. Anyone able to recommend the best method, with an example?

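A minimal sketch of the KV store route, assuming DB Connect's dbxquery is available and the table's fixed identity column is named id (the connection, table, and lookup names are all hypothetical): setting _key to the identity field makes outputlookup update matching records instead of appending duplicates.

| dbxquery connection="my_db" query="SELECT * FROM my_table"
| eval _key = id
| outputlookup my_kvstore_lookup append=true
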
I got a custom-crafted JSON file that holds a mix of data types within. I'm a newbie with Splunk administration, so bear with me. This is valid JSON; as far as I understand, I need to define a new line-breaking definition with a regex to help Splunk parse and index this data correctly with all fields. I minified the file and uploaded it after verifying that my regex actually matches. Can you assist with what could be a good regex definition? Below is a snippet from the file I want to parse; there should be 2 events in there:

{"data":[{"serial":[0],"_score":null,"_type":"winevtx","_index":"xxx","_id":"xxx","_source":{"process_id":48,"message":"","provider_guid":"xxx","log_name":"Security","source_name":"Microsoft-Windows-Security-Auditing","event_data":{"TicketOptions":"xxx","TargetUserName":"xxx","ServiceName":"krbtgt","IpAddress":"::ffff:","TargetDomainName":"xxx","IpPort":"53782","TicketEncryptionType":"0x12","LogonGuid":"xxx","TransmittedServices":"-","Status":"0x0","ServiceSid":"xxx"},"beat":{"name":"xxx","version":"5.2.2","hostname":"xxx"},"thread_id":1016,"@version":"1","@metadata":{"index_local_timestamp":"2019-07-20T06:27:21.23323","hostname":"xxxDC","index_utc_timestamp":"2019-07-20T06:27:21.23323","timezone":"UTC+0000"},"opcode":"Info","@timestamp":"2019-07-20T06:25:33.801Z","tags":["beats_input_codec_plain_applied"],"type":"wineventlog","computer_name":"xxx","event_id":4769,"record_number":"198","level":"Information","keywords":["Audit Success"],"host":"xxx","task":"Kerberos Service Ticket Operations"}},{"serial":[1],"_score":null,"_type":"winevtx","_index":"xxx-xxx","_id":"==","_source":{"event_data":{"SubjectDomainName":"-","LogonType":"3","LogonGuid":"{xxx}","SubjectUserSid":"S-1-0-0","LogonProcessName":"Kerberos","TargetDomainName":"xxx","AuthenticationPackageName":"Kerberos","ProcessName":"-","SubjectLogonId":"0x0","TargetUserName":"xxx","ProcessId":"0x0","TargetLogonId":"","IpAddress":"::1","LmPackageName":"-","ImpersonationLevel":"%%1833","IpPort":"0","SubjectUserName":"-","TargetUserSid":"S-1-5-18","KeyLength":"0","TransmittedServices":"-"},"provider_guid":"{xxx}","beat":{"name":"xxx","version":"5.2.2","hostname":"xxx"},"@metadata":{"index_local_timestamp":"2019-07-20T06:34:21.23323","hostname":"xxx","index_utc_timestamp":"2019-07-20T06:34:21.23323","timezone":"UTC+0000"},"opcode":"Info","@timestamp":"2019-07-20T06:33:40.262Z","thread_id":52,"event_id":4624,"record_number":"123","level":"Information","log_name":"Security","source_name":"Microsoft-Windows-Security-Auditing","@version":"1","process_id":48,"host":"xxx","type":"wineventlog","computer_name":"xxx","version":1,"tags":["beats_input_codec_plain_applied"],"keywords":["Audit Success"],"task":"Logon","message":""}}]}

Berry

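A minimal sketch of a props.conf stanza, assuming each event should start at {"serial": and that the @timestamp field should drive _time (the sourcetype name is hypothetical):

[custom_json]
SHOULD_LINEMERGE = false
# Break before each {"serial":...} object; the captured [ or , is discarded.
LINE_BREAKER = ([,\[])\{"serial":
TIME_PREFIX = "@timestamp":"
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%3NZ
TRUNCATE = 0
# The last event keeps the wrapper's closing ]}; strip it at index time.
SEDCMD-strip_tail = s/\]\}\s*$//
# Search-time JSON field extraction.
KV_MODE = json
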
Hi, I'm trying to use AppDynamics to monitor an application's session counts based on a specific program that the application is running. I also need to monitor the resource utilization of the database when this occurs. I would like to get results every minute, have it report immediately when a specific percentage/count of resources has been reached, and then report every 30 minutes thereafter if the condition is still occurring. This will be somewhat customized since I'm pulling a specific program. I haven't found a good document on how to implement custom SQL and report the results. I'm also not sure if session counts for the program could/should be pulled off the application vs. the database. Here is an idea of what I want to monitor and the count level at which I want it to report:

select username, osuser, machine, count(*)
from v$session
where program like '%w3wp%'
group by username, osuser, machine
having count(*) > 100;

select resource_name, current_utilization, max_utilization, limit_value
from v$resource_limit
where resource_name in ('sessions', 'processes')
and current_utilization > 530;

My indexer cluster is down except for 1 out of 6 indexers. Port 8089 suddenly stopped working for the indexers and for CM-to-indexer comms, and I get the error messages below. It's a multi-site indexer cluster. I have run telnet and curl commands against 8089 on the indexers but am still unable to connect to all but 1 of the 6. Also, the deployment server is not accessible: the CM is unable to connect to the indexers on 8089, the indexers cannot talk to each other on port 8089 either, and the DS is not able to connect to my indexers on 9996. FYI, custom SSL is enabled on 8089, but I don't see it as the cause of this connectivity issue. I have checked with the networking team, who say it's an application issue and not an iptables/routing issue on the server like I suspected. Please help.

IDX:
02-10-2020 03:19:20.324 +0000 WARN CMSlave - Failed to register with cluster master reason: failed method=POST path=/services/cluster/master/peers/?output_mode=json master=myCM:8089 rv=0 gotConnectionError=0 gotUnexpectedStatusCode=1 actual_response_code=500 expected_response_code=2xx status_line="Internal Server Error" socket_error="No error" remote_error=Cannot add peer=myidx mgmtport=8089 (reason: http client error=No route to host, while trying to reach https://myidx:8089/services/cluster/config). [ event=addPeer status=retrying AddPeerRequest: { _id= active_bundle_id=EF3B7708025567663732F8D6B146A83 add_type=Clear-Masks-And-ReAdd base_generation_id=2063 batch_serialno=1 batch_size=1 forwarderdata_rcv_port=9996 forwarderdata_use_ssl=1 last_complete_generation_id=0 latest_bundle_id=EF3B77080255676637732F8D6B146A83 mgmt_port=8089 name=EEC311D7-7778-44FA-B31D-E66672C1D568 register_forwarder_address= register_replication_address= register_search_address= replication_port=9100 replication_use_ssl=0 replications= server_name=myidx site=site3 splunk_version=7.2.6 splunkd_build_number=c0bf0f679ce9 status=Up } ].

CM:
02-07-2020 18:00:41.497 +0000 WARN CMMaster - event=heartbeat guid=BDD6A029-2082-48ED-96F3-21BD624D94CD msg='signaling Clear-Masks-And-ReAdd' (unknown peer and master initialized=1
02-07-2020 18:00:41.911 +0000 WARN TcpOutputFd - Connect to myidx:9996 failed. No route to host
02-07-2020 18:00:41.912 +0000 WARN TcpOutputProc - Applying quarantine to ip=myidx port=9996 _numberOfFailures=2
02-07-2020 18:00:42.013 +0000 WARN TcpOutputFd - Connect to myidx:9996 failed. No route to host
02-07-2020 18:00:42.323 +0000 WARN CMMaster - event=heartbeat guid=44AF1666-AB56-4CC1-8F01-842AD327CF79 msg='signaling Clear-Masks-And-ReAdd' (unknown peer and master initialized=1
02-07-2020 10:36:54.650 +0000 WARN CMRepJob - _rc=0 statusCode=502 transErr="No route to host" peerErr=""
02-07-2020 10:36:54.650 +0000 WARN CMRepJob - _rc=0 statusCode=502 transErr="No route to host" peerErr=""

DS trying to connect to indexers:
02-07-2020 11:56:12.097 +0000 WARN TcpOutputFd - Connect to idx2:9996 failed. No route to host
02-07-2020 11:56:12.098 +0000 WARN TcpOutputFd - Connect to idx3:9996 failed. No route to host
02-07-2020 11:56:13.804 +0000 WARN TcpOutputFd - Connect to idx1:9996 failed. No route to host

I'm curious if this add-on will work with the GitHub SaaS solution. It looks like it's been a while since it was updated, so I'm just curious. If not, do you know of an add-on that does?

Hi, thanks to the wonderful website_monitoring app, I see some interesting but unexplained tidbits. We have two indexers with HEC configured. Because of project delays, those HEC inputs are idle. I use https://splunk-index1:8088/services/collector/health for the query in website_monitoring, and at least once a day I get a 5-second response time on one of the indexers, but not the other. Usually it is less than 20 ms. Checking _index/_audit for anything happening in parallel, I have found nothing so far that would explain this monster increase. It is not linked to specific times. If I only poll the port, the peak times are just up to 60 ms worst case, but that gives me an ugly 404 error, so I figured I might as well use a decent endpoint. Any ideas? thx afx

Hello experts, I am trying to read the text from the last square bracket (which is TestModelCompany,en_US):

21:11:31,367 INFO [TestBenuLogger] [155.56.208.68] [716057] [-] [TestModelCompany,en_US] No 1 XX_TimeStep="10" XX_TimeQuery="10" XX_HTTPSession="1398708550-1911P0" XX_QuerySession="null" XX_TimeStamp="2020-02-09T20:11:31.358Z-PY" XX_Company="Model Company" XX_QueryMode="STANDARD" XX_Agent="Model" Starting Model API : Mode : Standard Query Operation : QUERY Company : Model Company New Snapshot Calculation

I wrote a regular expression to extract the content from the last bracket, (?<=\[)[^\[\]]*(?=][^\[\]]+$), and it works well. However, I am unable to integrate it into Splunk. This is my existing Splunk query:

sourcetype=text XX_Company="*" last_modified_on index="*_test_application"
| rex field=_raw "last_modified_on.*?to_datetime\('(?<lmo_date>.*?):\d\d\w\'"
| eval lmo_date_converted=strptime(lmo_date,"%Y-%m-%dT%H:%M")
| eval daysDiff=(_time-lmo_date_converted)/86400
| rex field=_raw "(?<name><=\[)[^\[\]]*(?=][^\[\]]+$)"
| where daysDiff > 90
| stats avg(daysDiff) as "Last Modified On average days in past", max(daysDiff) as "Max Value Of Last Modified On" by XX_Company XX_Mode
| sort -"Last Modified On average days in past"

This is a working Splunk query. With it, I would like to display the content from the last bracket as a column. Could you guide me?

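A minimal sketch of how that rex line could be written, assuming the named group should wrap the bracket contents rather than sit inside a lookbehind (the (?<name>...) group has to contain what you want captured):

| rex field=_raw "\[(?<name>[^\[\]]*)\][^\[\]]*$"

Then name can be added to the stats by clause (e.g. by XX_Company XX_Mode name) to display it as a column.
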
Date=2020-02-10|StrtTime=09:56:08|EndTime=09:56:08|Duration=7|EvntType=MSG|UUID=

Props that I am using:

TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d
MAX_TIMESTAMP_LOOKAHEAD = 40
LINE_BREAKER = Date=\d+-\d+-\d+
TRUNCATE = 9999
SHOULD_LINEMERGE = false
CHARSET = UTF-8
disabled = false

Can I use TIME_FORMAT = %Y-%m-%d, or do I have to use TIME_FORMAT = %Y-%m-%d|StrtTime=%H:%M:%S?

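A minimal sketch of what might work better, assuming you want StrtTime included in the event timestamp: TIME_PREFIX should consume the literal Date= label, TIME_FORMAT may contain the literal |StrtTime= text so both date and time are parsed, and LINE_BREAKER normally needs a capture group marking the text to discard between events.

TIME_PREFIX = ^Date=
TIME_FORMAT = %Y-%m-%d|StrtTime=%H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 40
LINE_BREAKER = ([\r\n]+)(?=Date=\d+-\d+-\d+)
SHOULD_LINEMERGE = false

With only %Y-%m-%d, every event from the same day would be stamped at midnight, so including StrtTime is usually the better choice.
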
This is what we have in the logs:

index="xyz" INFO certvalidationtask

And this prints a JSON-style object which consists of a list of commonName + expirationDate pairs:

Stage.env e401a4ee-1652-48f6-8785-e8536524a317 [APP/PROC/WEB/0] - - 2020-02-10 16:09:01.525 INFO 22 --- [pool-1-thread-1] c.a.c.f.c.task.CertValidationTask : {commonName='tiktok.com', expirationDate='2020-05-21 17:50:20'}{commonName='instagram.com', expirationDate='2020-07-11 16:56:37'}{commonName='blahblah.com', expirationDate='2020-12-08 11:30:42'}{commonName='advantage.com', expirationDate='2020-12-10 11:41:31'}{commonName='GHGHAGHGH', expirationDate='2021-05-19 08:34:03'}{commonName='Apple Google Word Wide exercise', expirationDate='2023-02-07 15:48:47'}{commonName='some internal cert1', expirationDate='2026-06-22 13:02:27'}{commonName='Some internal cert2', expirationDate='2036-06-22 11:23:21'}

I want a table which contains 2 columns, Common Name and Expiration Date, where if the expiration date is less than 30 days from the current date we show that row in red, less than 90 days in yellow, and everything else in green. Many thanks in advance.

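A minimal sketch of the extraction and bucketing, assuming the pairs always use the single-quoted key='value' layout shown above; the red/yellow/green coloring itself would then be applied in the dashboard, keyed off the range field.

index="xyz" INFO certvalidationtask
| rex max_match=0 "commonName='(?<commonName>[^']*)',\s*expirationDate='(?<expirationDate>[^']*)'"
| eval pair=mvzip(commonName, expirationDate, "|")
| mvexpand pair
| eval commonName=mvindex(split(pair,"|"),0), expirationDate=mvindex(split(pair,"|"),1)
| eval daysLeft=round((strptime(expirationDate,"%Y-%m-%d %H:%M:%S")-now())/86400)
| eval range=case(daysLeft<30,"red", daysLeft<90,"yellow", true(),"green")
| table commonName expirationDate daysLeft range
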
I have the username field extraction as follows in props.conf, which extracts the email address:

[sourcetype_X]
EXTRACT-XYZ = username="(?<user>[^+\"]*)"

This extracts the field as follows:

x12345@abc-def-ghij-01.com
y67891@klm-def-ghij-01.com
z45787@abc-def-ghij-01.com
ABC-DEF

Now, what would the regex stanza be to extract the username from the above as follows?

x12345
y67891
z45787
ABC-DEF

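A minimal sketch, assuming the raw events still carry username="..." and that everything before the first @ (or the whole value when there is no @, as with ABC-DEF) is what you want; the stanza attribute name EXTRACT-XYZ-short is hypothetical:

[sourcetype_X]
EXTRACT-XYZ-short = username="(?<user_short>[^@+\"]*)

Alternatively, derive it from the existing field at search time with | eval user_short=mvindex(split(user,"@"),0).
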
I have a field that contains:

CN=Joe Smith,OU=Support,OU=Users,OU=CCA,OU=DTC,OU=ENT,DC=ent,DC=abc,DC=store,DC=corp

I'd like to trim off everything after the first comma. This information can always be changing, so there is no set number of characters. Thanks.

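A minimal sketch, assuming the field is named dn (a hypothetical name); splitting on the comma handles values of any length:

| eval cn = mvindex(split(dn, ","), 0)

A rex equivalent would be | rex field=dn "^(?<cn>[^,]+)".
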
Getting an XML error while trying to install the Splunk Enterprise Security app.

Splunk Enterprise version: 8.0
Splunk ES app version: 6.0
Error: "This XML file does not have any style information associated with it"

I have been ingesting data from an Akamai WAF using the Akamai TA from Splunkbase. Once I had sorted all of the firewall issues and such with the team, I had it working how I want it. I have the TA installed on the HF and the search peers of my index cluster, with the base stanza in default/inputs.conf set to disabled. I have then created a lightweight TA which just has inputs.conf set up with the appropriate tokens, URLs, etc., and have that only on the HF. The TA itself has a linux folder which contains a bash script that calls the Java app that makes the connection to the REST API. All good so far. However, when I deployed the Splunkbase TA to the indexers, it still tries to run the Java app even though I have the inputs stanza disabled. Does Splunk run scripts in the linux folders (and I assume windows too) if it finds them? If so, how do I disable them on the indexers but not on the HF? The Splunkbase TA also has props and transforms, so I definitely want it on both the HF and the indexers. Hope this makes sense; any help is greatly appreciated. Many thanks

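A minimal sketch of one way to suppress it on the indexers only, assuming the script is launched by a scripted-input stanza (Splunk runs scripts referenced from an enabled inputs.conf stanza, not merely because they exist in a bin folder); the app name and script path below are hypothetical:

# $SPLUNK_HOME/etc/apps/TA-Akamai/local/inputs.conf, deployed to the indexers only
[script://./bin/linux/akamai.sh]
disabled = true

Since local/ overrides default/, the HF copy of the TA is untouched and the props/transforms stay active everywhere.
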
I have data from a couple of different sources that I am trying to combine into coherent results. The issue I am running into is that sometimes the data does not line up perfectly. Both data sources report on a user and try to list all their email aliases, but sometimes the lists are incomplete and only partially overlap. So we end up with multiple rows that represent the same user and have most of the same values for the email field, but because they are not exactly the same, when I try to group by email address it doesn't work out how I would hope. I included some example SPL below to illustrate what the data looks like. There are also some other fields in the results, but those cannot be used for merging either, as the email address of the user is the only field that is in both data sets.

| makeresults
| eval email=split("1@example.com,2@example.com;2@example.com,3@example.com;4@example.com;5@example.com", ";")
| mvexpand email
| eval email=split(email, ",")
| streamstats count as orig_row

So I am wondering if there is any way to combine rows #1 and #2 in the example results while leaving rows 3 and 4 intact? Thanks!

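A minimal sketch of one approach, continuing from the example SPL above: expand the aliases, label each alias with the lowest row number that contains it, propagate that label back to the rows, and regroup. This merges rows that directly share an address (rows 1 and 2 here); chains more than one hop long would need the eventstats steps repeated.

| makeresults
| eval email=split("1@example.com,2@example.com;2@example.com,3@example.com;4@example.com;5@example.com", ";")
| mvexpand email
| eval email=split(email, ",")
| streamstats count as orig_row
| mvexpand email
| eventstats min(orig_row) as group_id by email
| eventstats min(group_id) as group_id by orig_row
| stats values(email) as email by group_id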