All Topics


Hi. I have a summary index, index_sum, which has 2 events with 2 attributes, A1_sum and A2_sum:

    1590482539, 7722527
    1591080961, 7722525

I also have index2, where many timestamped events are stored; the event time _time is important. I want to find max(A1_sum) from index_sum and use this value to filter events from index2, something like:

    index2 | where _time > max(A1_sum)

Can you help me with this problem, please?
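One common pattern, sketched here with the field names from the post (and assuming A1_sum holds epoch seconds): compute the maximum in a subsearch and hand it back to the outer search as an earliest= time modifier via the return command:

```spl
index=index2
    [ search index=index_sum
      | stats max(A1_sum) as earliest
      | return earliest ]
```

The subsearch expands to the literal term earliest="1591080961", which Splunk interprets as a time bound on the outer search, so only index2 events newer than max(A1_sum) are returned.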
I have the following case: I have 3 different indexes (A, B and C). My goal is to find what percentage of the devices found in index B can also be found in index C.

In index A I have the fields asset_name and mac_address (~1000 different devices).
In index B I have the field src_mac (~150 different devices).
In index C I have the field asset_name (~6000 different devices).

Basically, I first try to find the asset names of all 150 hosts from index B by looking into index A, and then I compare the newly found asset names to the asset names in index C.

    index="C"
    | stats count by asset_name
    | join type=left
        [ search (index=A OR index=B)
        | eval all_macAddresses = coalesce(src_mac, mac_address)
        | stats values(asset_name) as asset_name values(src_mac) as src_mac values(mac_address) as mac_address by all_macAddresses
        | eval match = if(src_mac == mac_address, "match", "no_match")
        | where match="match"
        | table asset_name all_macAddresses ]
    | eval new=if(isnull(all_macAddresses),"NOT_OK","OK")
    | stats count by new

I managed to get the results by using the above search, but I was wondering whether this could be achieved without using any subsearches.
I have a query that joins the data from two types of log.

The 1st search acts on log lines like this:

    2020-06-02T10:54:05,899 [431972] INFO iseries.programcall.access.ProgramCallImpl Completed ConnectionPool=BasketEnquiry, P=FII6021 RT=10, Job0=064158/QUSER/QZRCSRVS, IB=1_, IR=e13ccfe3-cc40-40d3-a262-eb63fef8b0c3, IT=87d2938f-d166-4acc-8947-fca16c1b00df, IA=P0S000529, IM=acffed94-3679-46cb-8375-263e96873ea7, AdditionalLogInfo=<<EMPTY>>

The 2nd search acts on log lines like this:

    2020-06-02T10:56:32,621 [235270] INFO programcall.access.connection.LoggingCommandConnectionPoolDecorator ConnectionPool=Enquiry, ConnectionAction='being requested', PoolMax=30, PoolActive=0, PoolAvailable=1, PoolFree=30, PoolFreeCapacity=100, Context: P=LDI6203, IR=0b86c9ea-cce0-4cef-a3d7-9581c6f67357, IT=4d583bea-a5c1-4c92-8097-a68b53e95b84, IA=P0S000531, IM=f0af22f2-efea-44d5-ae7a-1912a3dada5b

The query is:

    index="javaprod" INFO Completed ConnectionPool=* P=* RT=*
    | eval KEY=ConnectionPool+":"+P
    | stats max(RT) as MaxResponse, count as Requests, min(ConnectionPool) as ConPool, min(P) as Pcml by KEY
    | table KEY, ConPool, Pcml, MaxResponse, Requests
    | appendcols
        [ search index="javaprod" INFO PoolFreeCapacity=* ConnectionPool=*
        | eval KEY=ConnectionPool+":"+P
        | stats min(PoolFreeCapacity) as MinFreeCapacity by KEY ]

And sample output might be:

    KEY                    ConPool        Pcml     MaxResponse  Requests  MinFreeCapacity
    BasketEnquiry:FII6001  BasketEnquiry  FII6001  182          129       100
    BasketEnquiry:FII6010  BasketEnquiry  FII6010  1908         129       100
    BasketEnquiry:FII6021  BasketEnquiry  FII6021  372          130       100
    BasketEnquiry:GEI6000  BasketEnquiry  GEI6000  673          10        100
    BasketEnquiry:LDI6000  BasketEnquiry  LDI6000  155          410       98

So what I'd like to do is record these values in a summary index (capturing the stats every minute or hour), but I don't know whether appendcols with two sistats commands will work. I can't find anything related to this in the existing questions or docs.
Does anyone know if summary indexes are designed to work with such a query, or do I need two separate sistats queries and then join them afterwards in the standard query when I report on the aggregated results? Supplementary question: if I get it wrong, can I purge the sistats data and try again?
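A hedged sketch of the two-search approach (the summary index name pool_summary is an assumption): schedule each leg as its own sistats search writing to the same summary index, then recombine at report time with a single stats by the shared KEY field. The renames are deferred to report time because si commands carry their own internal summary fields:

```spl
/* Scheduled search 1, every hour, summary index = pool_summary */
index="javaprod" INFO Completed ConnectionPool=* P=* RT=*
| eval KEY=ConnectionPool+":"+P
| sistats max(RT) count by KEY

/* Scheduled search 2, same schedule, same summary index */
index="javaprod" INFO PoolFreeCapacity=* ConnectionPool=*
| eval KEY=ConnectionPool+":"+P
| sistats min(PoolFreeCapacity) by KEY

/* Report-time search over the summary */
index=pool_summary
| stats max(RT) as MaxResponse count as Requests min(PoolFreeCapacity) as MinFreeCapacity by KEY
```

On the supplementary question: summary-index events can be removed like any other events, i.e. by searching the summary index for the bad time range and piping to the delete command (which requires the can_delete role), then re-running the scheduled searches by hand to backfill.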
Hi, I need to deploy the Palo Alto Add-on but must ensure it doesn't connect using plain-text credentials. Looking at the Add-on settings, there is no option to use SSL for API calls.
Hello, I have an alert that produces a table as output; let us say it looks as follows:

    SYSSID, HOST, EMAIL
    BWP, h1, email_list_1
    BWP, h1, email_list_2

Now, I would like two separate alerts to be triggered, one for row 1 and a second for row 2; the idea is that they are then sent to separate email recipients. What I did was use throttling per result, with the following settings for "Suppress results containing field value":

    $result.SYSSID$, $result.HOST$, $result.EMAIL$

What I would expect is two separate alerts triggered (one per result), both then suppressed for the given time (15 min). Unfortunately, I am not able to get it working. What happens is that only the first row is "seen" when processing the alert, and correspondingly I get only one alert triggered, which is wrong. Could you please help? Kind regards, Kamil
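One hedged observation: the "Suppress results containing field value" box expects bare field names rather than $result.…$ tokens, and the trigger mode must be "For each result" for row-by-row alerting. The savedsearches.conf equivalent of that combination would look roughly like:

```ini
# Per-result alerting with field-based throttling (15 minutes)
alert.digest_mode = 0
alert.suppress = 1
alert.suppress.fields = SYSSID,HOST,EMAIL
alert.suppress.period = 15m
```

With digest mode off, each result row is evaluated separately, and suppression is keyed on the listed fields, so two rows differing in EMAIL should each fire once.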
Hi Splunkers, one of our clustered indexers (a physical server) will go through a battery replacement and will be down for around 1-2 hours. During the replacement, do we need to initiate maintenance mode or run the splunk offline command? We would like to hear your feedback so we can avoid bucket-related issues. Regards, Kevin
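For reference, a sketch of the usual sequence for planned peer downtime (commands run from $SPLUNK_HOME/bin; maintenance mode prevents unnecessary bucket fixup while the peer is away):

```
# On the cluster master, before the work:
splunk enable maintenance-mode

# On the indexer being serviced, to shut it down gracefully:
splunk offline

# ... replace the battery, boot the server, start Splunk on the peer ...

# On the cluster master, once the peer has rejoined:
splunk disable maintenance-mode
```

If the outage might exceed the cluster's restart_timeout, it may be worth raising that setting on the master first so the peer is not treated as permanently down.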
Below are a few samples of what my source filenames look like:

    source="\\abc.com\storage\Queue\Name1\abcdLogs\sample0008095200531.txt"
    source="\\abc.com\storage\Queue\Name1\abcdLogs\sample0008096200531.txt"

Here the last 6 characters before .txt represent a date, i.e. in the above case 200531 is 31st May 2020. I want to extract, at index time, an Id which comes after "sample" and before the date, with any leading zeros excluded, so in the above two cases my Ids will be 8095 and 8096.

Here is my transforms.conf:

    [Id]
    SOURCE_KEY = MetaData:Source
    REGEX = sample0*([0-9A-Za-z]+)\d{6}.*txt
    FORMAT = Id::$1
    WRITE_META = true

and fields.conf:

    [Id]
    INDEXED=true
    INDEXED_VALUE=source::*<VALUE>*

Now when I search for e.g. Id="8095" it returns no results, but when I search Id="*8095" it does return results. Sometimes I have to include a wildcard at the start or the end to get results. Why does the Id behave as if whitespace were included at the start or end? Am I doing anything wrong? Thanks
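One tentative observation: because WRITE_META = true already writes Id::<value> into the index as an indexed extraction, fields.conf normally only needs the INDEXED flag; INDEXED_VALUE is meant for fields that are *not* indexed but whose values appear verbatim in the event, and combining it with INDEXED = true changes how search terms are matched against the raw data. A minimal sketch of the fields.conf to try:

```ini
[Id]
INDEXED = true
```

With that, Id="8095" should be answered directly from the indexed Id::8095 token rather than being rewritten into a wildcard match against source.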
I am trying to alter the Response Time History Chart to display a red bar when a failure occurs, not when a response time threshold is met.

    sourcetype="web_ping" `website_monitoring_search_index` title="$title$"
    | timechart avg(total_time) as response_time
    | eval response_time_over_threshold=if(response_time>`response_time_threshold`,response_time,0)
    | eval response_time=if(response_time>`response_time_threshold`,0,response_time)

TIA
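A sketch of one possible direction, assuming the web_ping events carry some failure indicator; the response_code field below is an assumption, so substitute whatever the Website Monitoring app actually logs for a failed check:

```spl
sourcetype="web_ping" `website_monitoring_search_index` title="$title$"
| eval failed=if(isnull(response_code) OR response_code>=400, 1, 0)
| timechart avg(total_time) as response_time, max(failed) as had_failure
| eval response_time_failed=if(had_failure>0, response_time, 0)
| eval response_time=if(had_failure>0, 0, response_time)
```

The idea mirrors the original threshold logic, but the series split (and hence the red bar) is driven by the per-bucket failure flag rather than by the response time.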
Hi all, we regularly get an "Invalid credentials" error in splunkd, even though LDAP is working correctly. I don't understand why:

    ScopedLDAPConnection - strategy=" Error binding to LDAP. reason="Invalid credentials"

We get 8 errors every 15 minutes, like clockwork, as if a scheduled task were running and generating this error all the time. Can anyone let me know why it is showing this error and how I can get rid of it? Thanks
Hi team, I have Windows security events and process events indexed in my Splunk instance. How can I find out which user is running which process? That is, how do I correlate the security events with the process events and get statistics for the users who logged in to the server?
Hi folks, we have custom certificates in our indexer cluster and search head cluster which are expired, BUT replication is happening, forwarders are authenticating, and so on. Strangely, the mongod process is not starting after an upgrade from 7.2.4 to 8.0.3. Is there any dependency on the certificates? I can't see how, as all other tasks are working fine. Also, could you please explain why we need MongoDB if we are not using the KV store, especially in an indexer cluster? Any help much appreciated. Thanks, Pramodh
Hi all, any assistance with this app would be appreciated. I managed to connect to our LA workspace and receive logs in Splunk, but none of the logs have any field extractions.
When people RDP into a server, the results I get into Splunk are Account_Name=Server1$ and Account_Name=jdoe. When I try to display the data in a table it displays:

    Account_Name: Server1$ jdoe

I want to remove "Server1$" from the field. One thing I will add: this only happens some of the time, not always. Is there a wildcard to remove anything ending in "$"?
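One common way to drop machine accounts (names ending in $) from a multivalue field, sketched here with the field name from the post:

```spl
... | eval Account_Name=mvfilter(NOT match(Account_Name, "\$$"))
```

mvfilter keeps only the values for which the predicate is true, so Server1$ is removed whenever it is present and jdoe is kept; events where the field is single-valued (just jdoe) pass through unchanged.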
I have a JSON structure that contains an object map:

    {
      "correlation_id": "f9535d13-f75b-4dd7-8c39-1e77b1559afe",
      "targeting_data": [
        {
          "attribute_values": {
            "1013": "005",
            "2056": "07",
            "2057": "01",
            "2058": "03",
            "2060": "02",
            "2065": "01",
            "2075": "04",
            "2080": "03",
            "2081": "01",
            "DMA": "803",
            "RECTYPE": "HD",
            "RECVCNT": "6",
            "STATE": "CA",
            "SVCPKGTIER": "5"
          },
          "origin": null
        }
      ],
      "timestamp": "2020-06-02T00:02:09.257+00:00",
      "zone_target_area": "195"
    }

How do I take the fields extracted as targeting_data{}.attribute_values.1013, targeting_data{}.attribute_values.2056, etc. and output the field names (1013, 2056) as values? I would like my output to be a list of the map's keys.
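A sketch of one approach using foreach with a wildcard, where the <<MATCHSTR>> template expands to the part of each field name that matched the * (field paths taken from the post; map_keys is just an illustrative output field name):

```spl
... | foreach "targeting_data{}.attribute_values.*"
        [ eval map_keys=mvappend(map_keys, "<<MATCHSTR>>") ]
```

This builds a multivalue field containing 1013, 2056, DMA, and so on. On recent Splunk versions an alternative is to spath the attribute_values object into a field and call json_keys() on it.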
Hi Splunkers, I am receiving a vulnerability report on all my Splunk servers saying:

    Issue: The PCI Data Security Standard requires a minimum of TLS v1.1 and recommends TLS v1.2. In addition, the FIPS 140-2 standard requires a minimum of TLS v1.1 and recommends TLS v1.2.
    Resolution: Configure the server to require clients to use TLS version 1.2 using Authenticated Encryption with Associated Data (AEAD) capable ciphers for ports 2222 and 443.

I tried looking into some files within Splunk like outputs.conf, inputs.conf, server.conf etc., per:

    https://docs.splunk.com/Documentation/Splunk/8.0.3/Security/AboutTLSencryptionandciphersuites

I am unsure which files need to be changed, and whether it is the same process on indexers, the deployment server, forwarders, and search heads. Could you please suggest?

Note: I tried changing server.conf on the search heads and restarted the Splunk processes, but it did not help. I added the line

    sslVersions = tls1.2

and tested it via the command below, but I see it still accepts TLSv1:

    openssl s_client -connect xxxxx002:443 -tls1

Thanks, Amit
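One hedged pointer: port 443 is typically served by Splunk Web, which reads its TLS settings from web.conf rather than server.conf (server.conf's [sslConfig] governs the splunkd management port, 8089). So a change limited to server.conf would not affect an openssl test against 443. A sketch of the two stanzas:

```ini
# server.conf -- splunkd management port (8089)
[sslConfig]
sslVersions = tls1.2

# web.conf -- Splunk Web (443 or 8000)
[settings]
sslVersions = tls1.2
```

Forwarder-to-indexer traffic (9997 etc.) is configured separately via sslVersions in inputs.conf/outputs.conf on the respective ends, so each tier needs its own check.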
Hi, I am trying to get the top-10 table from index A to include corresponding asset information from index B as additional columns.

The hostname field in index A is called HostxA; the hostname field in index B is called HostxB. There are some duplicate entries in both. Currently my search can find the top 10 from index A and remove the duplicates based on IP address; however, I am having difficulty using the DNS value from index A as an input to find the correlating data in index B.

    index="indexA" HOSTSUMMARY OS="Windows Server*"
    | dedup IP
    | sort -Errors_5
    | head 10
    | table DNS, IP, Errors_5, Errors_4, Errors_3, Total_Errors

Second table:

    index="indexB" Hostname=DNS
    | table Asset-ID, Asset-Tag

Desired resulting table:

    DNS, IP, Errors_5, Errors_4, Errors_3, Total_Errors, Asset-ID, Asset-Tag

Will appreciate some guidance. Thanks
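A sketch of one join-free pattern: search both indexes at once, coalesce the two hostname fields into a common key, and roll everything up with stats (field names are taken from the post; the hyphenated fields need quoting in stats):

```spl
(index="indexA" HOSTSUMMARY OS="Windows Server*") OR (index="indexB")
| eval host_key=coalesce(DNS, Hostname)
| stats values(IP) as IP max(Errors_5) as Errors_5 max(Errors_4) as Errors_4
        max(Errors_3) as Errors_3 max(Total_Errors) as Total_Errors
        values("Asset-ID") as "Asset-ID" values("Asset-Tag") as "Asset-Tag"
        by host_key
| sort - Errors_5
| head 10
```

This assumes DNS and Hostname hold the same form of the hostname; if one is an FQDN and the other a short name, normalize them (e.g. with a lower()/replace() eval) before the stats.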
Hi all, I have the following query with 5 source types and 2 evals. The common field between the source types is the correlation id, plus an elapsed time which may or may not exist; I use coalesce since the field name formats can differ. I want to return each unique correlation id across the different sources with its elapsed time, and return "Not exists" where it does not exist. When I run the query below it does not return any results. What is wrong with it? Is using 2 evals an issue?

    (sourcetype=source1) OR (sourcetype=source2) OR (sourcetype=source3) OR (sourcetype=source4) OR (sourcetype=source5)
    | eval CorrelationId=coalesce('Properties.CorrelationId',CorrelationId,x-correlation-id,x_correlation_id )
    | eval ElapsedTime = coalesce('Properties.elapsedMs','Properties.ElapsedMs','Properties.ElapsedTime',elapsedMs,elapsed)
    | stats values(ElapsedTime) as ElapsedTime by CorrelationId sourcetype
    | xyseries CorrelationId sourcetype ElapsedTime
    | fillnull source1 source2 source3 source4 source5 value="Not exists"
    | table CorrelationId source1 source2 source3 source4 source5
    | sort CorrelationId
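Two things stand out, offered tentatively. First, in eval an unquoted x-correlation-id is parsed as the arithmetic expression x minus correlation minus id, so that field name needs single quotes. Second, fillnull expects the value= option before the field list. A corrected sketch of the affected lines:

```spl
| eval CorrelationId=coalesce('Properties.CorrelationId', CorrelationId, 'x-correlation-id', x_correlation_id)
...
| fillnull value="Not exists" source1 source2 source3 source4 source5
```

Note also that after xyseries the column names are the literal sourcetype values, so the source1 ... source5 names in fillnull and table must match the real sourcetype strings exactly.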
I'm requesting help constructing a regular expression for the following. I need to extract two values from the string below:

    [app/task/function/5]

    field_a = 'app' (the string after the first [ and before the first slash)
    field_b = '5' (the value after the last slash and before the closing ])

Another example:

    [app/task/3]

    field_a = app
    field_b = 3

In addition, I need the extraction to fail if a particular string of characters is found. For example, if the string to exclude is 'function':

    [function/app/2]

the extraction should fail, since 'function' is contained in the string. Any assistance would be appreciated. Thanks in advance.
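A sketch of a rex that handles all three examples, assuming the exclusion applies only when the excluded word is the *first* segment (the requirements conflict otherwise, since the first example [app/task/function/5] itself contains 'function' yet is expected to extract):

```spl
... | rex "\[(?!function/)(?<field_a>[^/\]]+)/(?:[^\]]*/)?(?<field_b>[^/\]]+)\]"
```

Walking through it: the negative lookahead (?!function/) vetoes strings whose first segment is function; field_a captures up to the first slash; the optional (?:[^\]]*/)? greedily swallows any middle segments up to the last slash; field_b captures the final segment before the closing bracket. So [app/task/function/5] yields app/5, [app/task/3] yields app/3, and [function/app/2] fails to match.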
We're approaching this from an MSSP standpoint. We're looking at having an intermediate forwarder layer where we route data in a shared layer based on the client heavy forwarders. Essentially it would go:

    Client HF -> intermediate forwarder layer (routes based on host) -> client indexers

Is this possible? I can't seem to get the props/transforms config to work.

props.conf:

    [default]
    TRANSFORMS-setrouting = tsthost-routing
    TRANSFORMS-setrouting = abchost-routing

transforms.conf:

    [tsthost-routing]
    SOURCE_KEY = MetaData:Host
    REGEX = (hf1_tst)
    DEST_KEY = _TCP_ROUTING
    FORMAT = client-tst-lb-indexers

    [abchost-routing]
    SOURCE_KEY = MetaData:Host
    REGEX = (hf1_abc)
    DEST_KEY = _TCP_ROUTING
    FORMAT = client-abc-lb-indexers

I'm aware of potential parsing issues as well, and we'll be evaluating that. Thanks
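One likely culprit in the config as posted: within a single props.conf stanza, two attributes with the same name (TRANSFORMS-setrouting) collide and only one definition survives, so only one of the two routing rules ever runs. Listing both transforms in one attribute avoids this:

```ini
[default]
TRANSFORMS-setrouting = tsthost-routing, abchost-routing
```

The named stanzas (client-tst-lb-indexers, client-abc-lb-indexers) must of course exist as output groups in outputs.conf on the intermediate forwarders, and the intermediates need to be parsing (heavy) forwarders for index-time transforms to apply.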
I am on Splunk 7.3.3 and I installed the Palo Alto TA on the SH, HF, and IDX for field parsing. The TA works, but I am getting the following errors:

    6 errors occurred while the search was executing. Therefore, search results might be incomplete.
    Could not load lookup=LOOKUP-minemeldfeeds_dest_lookup
    Could not load lookup=LOOKUP-minemeldfeeds_src_lookup

I only see these lookups under automatic lookups. I am using Palo Alto TA add-on 6.2.0. I am not using the MineMeld feature, so I am looking for a way to disable it and stop the errors. Any advice is appreciated. Thank you.
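A hedged sketch of one way to switch the automatic lookups off: blank out the LOOKUP- definitions in a local props.conf override on the search head, since an empty value disables an inherited attribute. The stanza name below is an assumption; copy the exact stanza the TA uses (visible under Settings > Lookups > Automatic lookups) into $SPLUNK_HOME/etc/apps/Splunk_TA_paloalto/local/props.conf:

```ini
[pan:traffic]
LOOKUP-minemeldfeeds_dest_lookup =
LOOKUP-minemeldfeeds_src_lookup =
```

Keeping the override in the app's local directory (rather than editing default/) means it survives TA upgrades.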