All Topics


Is it possible to use data models from the Common Information Model (CIM) for use cases in Splunk? If so, how can we do that?
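For context, CIM data models are usually consumed with tstats (or the datamodel command) once an add-on has mapped the data onto the model. A minimal sketch, assuming the Authentication data model is populated in the environment (it runs fastest when the model is accelerated; any other CIM model works the same way by swapping the model and field names):

| tstats count FROM datamodel=Authentication WHERE Authentication.action="failure" BY Authentication.src Authentication.user
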
Hi, I am having difficulty breaking a JSON document into multiple events. Here is my log (it appears as one event instead of 2):

{ "InstanceInformationList": [ { "Version": false, "PlatformName": "Amazon Linux", "ComputerName": "ip-10-170-216-17.eu-east-1.compute.internal" }, { "PlatformType": "Linux", "IPAddress": "10.170.216.18", "AssociationOverview": { "DetailedStatus": "Failed", "InstanceAssociationStatusAggregatedCount": { "Failed": 1, "Success": 1 } }, "AssociationStatus": "Failed", "PlatformVersion": "2", "ComputerName": "ip-10-170-216-18.eu-east-1.compute.internal", "InstanceId": "i-00000000001", "PlatformName": "Amazon Linux" } ] }

And you can find my props.conf below:

[my_test]
SHOULD_LINEMERGE = false
INDEXED_EXTRACTIONS = json
DATETIME_CONFIG = CURRENT
TRUNCATE = 999999
JSON_TRIM_BRACES_IN_ARRAY_NAMES = true
BREAK_ONLY_BEFORE = (\[\s+\{)
MUST_BREAK_AFTER = (\},|\}\s+\])
SEDCMD-remove_header = s/(\{\s+.+?\[)//g
SEDCMD-remove_footer = s/\]\s+\}//g

Can you help me find the right parsing, please? Thank you.
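One approach that is often suggested for this kind of wrapped array (a sketch only, assuming the whole document arrives as a single chunk and that parsing happens on an indexer or heavy forwarder): drop INDEXED_EXTRACTIONS, break between array elements with LINE_BREAKER, and strip the wrapper with SEDCMD. Note that BREAK_ONLY_BEFORE and MUST_BREAK_AFTER are line-merging settings and are ignored when SHOULD_LINEMERGE = false.

[my_test]
SHOULD_LINEMERGE = false
DATETIME_CONFIG = CURRENT
TRUNCATE = 999999
# break between "}," and "{" of consecutive array elements; the captured comma is discarded
# (this relies on the elements being separated by "},{"; deeper nesting may need a stricter regex)
LINE_BREAKER = \}(,\s*)\{
# strip the wrapper so each event becomes a standalone JSON object
SEDCMD-remove_header = s/^\{\s*"InstanceInformationList":\s*\[\s*//
SEDCMD-remove_footer = s/\s*\]\s*\}\s*$//

With the structured extraction removed, the fields would come from search time instead, e.g. KV_MODE = json for this sourcetype in props.conf on the search head.
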
I created a new Splunk Enterprise instance that I want to connect to my pre-existing main Enterprise instance, which holds the bulk of our data. The intention of having two is so I can track the heartbeat messages between the servers and alert when one or the other goes down. I already have the new instance connected to the old one through outputs.conf, which gives me the ability to search for its heartbeat logs in index=_internal. However, connecting the main original instance to the new one is a different story. I have it forwarding to the new instance the same way, using outputs.conf, but I believe this is too much for the new instance to handle, as it is a ton of data (which I don't even want there). Is there a way to establish the connection so I can monitor for heartbeats, but not send any other data? What settings can I tweak to disable sending everything else while keeping the connection between the two, without turning off indexing on the new instance, so I can still monitor and alert when the old instance stops sending heartbeats because it has gone offline?
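One way this is commonly handled is with the forwardedindex filters in outputs.conf on the main instance, so that only _internal (where the heartbeat and metrics logs live) is forwarded while everything else keeps being indexed locally. A sketch with placeholder group name and address, not a drop-in config:

# outputs.conf on the main (original) instance
[tcpout]
defaultGroup = heartbeat_peer
indexAndForward = true            # keep indexing everything locally
# filters are evaluated in order; the last matching rule wins
forwardedindex.0.blacklist = .*
forwardedindex.1.whitelist = _internal
forwardedindex.filter.disable = false

[tcpout:heartbeat_peer]
server = 10.0.0.5:9997            # the new instance (placeholder address)
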
I'm trying to exclude specific src_ip addresses from the results of a firewall query (example below). The query completes, however the src_ip addresses are not excluded and the following error is returned: [subsearch]: The lookup table 'dns_serves.csv' requires a .csv or KV store lookup definition.

Example: index=firewall | search NOT [|inputlookup dns_serves.csv | fields src_ip] | table src_ip dest_ip signature

When running |inputlookup dns_servers.csv by itself the contents of the lookup are returned, so I know the lookup is good. I've checked the lookup permissions and the CSV encoding, and searched the forum threads for a solution.
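For reference, the usual exclusion pattern looks like this (a sketch; the subsearch has to reference a lookup file or lookup definition that actually exists and is shared to the app, and the field it returns must match the field name in the firewall events):

index=firewall NOT [| inputlookup dns_servers.csv | fields src_ip | format ]
| table src_ip dest_ip signature

The error text quotes 'dns_serves.csv' while the working test used 'dns_servers.csv', so it is worth double-checking that the file name in the subsearch matches the existing lookup exactly.
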
Hi there experts, in our current environment we have a Splunk integration with the CA UIM monitoring tool that sends Splunk alerts to CA UIM. While upgrading Splunk we learned that the client has a customized app for this integration written in Python 2, and since we are upgrading from 7.3 to 8.1 there is a Python compatibility issue, as the new Splunk versions support only Python 3. Does anyone have an idea of a workaround app or add-on on Splunkbase that we could use to integrate Splunk with CA UIM? Please help.
I know how to set values: multiselect.val(value_array); BUT: is there a way to set the labels to something different (not the actual value)? For example: I want to be able to select the country by its name, but in the search I use the country code. Something like: multiselect.label(label_array); or: multiselect.val(value_array, label_array); I tried an array with label-value pairs but it did not work.
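In SplunkJS the label/value split is normally expressed through the input's choices setting rather than through val(). A sketch, assuming a SimpleXML dashboard extension and a multiselect with id "country_multiselect" (both names are placeholders):

require(["splunkjs/mvc", "splunkjs/mvc/simplexml/ready!"], function (mvc) {
    var multiselect = mvc.Components.get("country_multiselect");
    // label = what the user sees, value = what the token/search receives
    multiselect.settings.set("choices", [
        { label: "Germany", value: "DE" },
        { label: "France",  value: "FR" }
    ]);
    // selection is still done by value (the country code)
    multiselect.val(["DE"]);
});
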
Hi, after the migration of our McAfee ePO server, I want to change the SQL query to reflect the changes made in the ePO database. But when I click on "next" in the Data Lab, step 4 fails. The query is OK (in the Data Lab it returns values) and the "rising" parameters seem to be good, so I don't see what's going wrong. Thanks in advance for your help. The query is below:

SELECT
[EPOEvents].[ReceivedUTC] as [timestamp],
[EPOEvents].[AutoID],
[EPOEvents].[ThreatName] as [signature],
[EPOEvents].[ThreatType] as [threat_type],
[EPOEvents].[ThreatEventID] as [signature_id],
[EPOEvents].[ThreatCategory] as [category],
[EPOEvents].[ThreatSeverity] as [severity_id],
[EPOEventFilterDesc].[Name] as [event_description],
[EPOEvents].[DetectedUTC] as [detected_timestamp],
[EPOEvents].[TargetFileName] as [file_name],
[EPOEvents].[AnalyzerDetectionMethod] as [detection_method],
[EPOEvents].[ThreatActionTaken] as [vendor_action],
CAST([EPOEvents].[ThreatHandled] as int) as [threat_handled],
[EPOEvents].[TargetUserName] as [logon_user],
[EPOComputerPropertiesMT].[UserName] as [user],
[EPOComputerPropertiesMT].[DomainName] as [dest_nt_domain],
[EPOEvents].[TargetHostName] as [dest_dns],
[EPOEvents].[TargetHostName] as [dest_nt_host],
[EPOComputerPropertiesMT].[IPHostName] as [fqdn],
[dest_ip] = ( convert(varchar(3),convert(tinyint,substring(convert(varbinary(4),convert(bigint,([EPOComputerPropertiesMT].[IPV4x] + 2147483648))),1,1)))+'.'+convert(varchar(3),convert(tinyint,substring(convert(varbinary(4),convert(bigint,([EPOComputerPropertiesMT].[IPV4x] + 2147483648))),2,1)))+'.'+convert(varchar(3),convert(tinyint,substring(convert(varbinary(4),convert(bigint,([EPOComputerPropertiesMT].[IPV4x] + 2147483648))),3,1)))+'.'+convert(varchar(3),convert(tinyint,substring(convert(varbinary(4),convert(bigint,([EPOComputerPropertiesMT].[IPV4x] + 2147483648))),4,1))) ),
[EPOComputerPropertiesMT].[SubnetMask] as [dest_netmask],
[EPOComputerPropertiesMT].[NetAddress] as [dest_mac],
[EPOComputerPropertiesMT].[OSType] as [os],
[EPOComputerPropertiesMT].[OSBuildNum] as [sp],
[EPOComputerPropertiesMT].[OSVersion] as [os_version],
[EPOComputerPropertiesMT].[OSBuildNum] as [os_build],
[EPOComputerPropertiesMT].[TimeZone] as [timezone],
[EPOEvents].[SourceHostName] as [src_dns],
[src_ip] = ( convert(varchar(3),convert(tinyint,substring(convert(varbinary(4),convert(bigint,([EPOEvents].[SourceIPV4] + 2147483648))),1,1)))+'.'+convert(varchar(3),convert(tinyint,substring(convert(varbinary(4),convert(bigint,([EPOEvents].[SourceIPV4] + 2147483648))),2,1)))+'.'+convert(varchar(3),convert(tinyint,substring(convert(varbinary(4),convert(bigint,([EPOEvents].[SourceIPV4] + 2147483648))),3,1)))+'.'+convert(varchar(3),convert(tinyint,substring(convert(varbinary(4),convert(bigint,([EPOEvents].[SourceIPV4] + 2147483648))),4,1))) ),
[EPOEvents].[SourceMAC] as [src_mac],
[EPOEvents].[SourceProcessName] as [process],
[EPOEvents].[SourceURL] as [url],
[EPOEvents].[SourceUserName] as [source_logon_user],
[EPOComputerPropertiesMT].[IsPortable] as [is_laptop],
[EPOEvents].[AnalyzerName] as [product],
[EPOEvents].[AnalyzerVersion] as [product_version],
[EPOEvents].[AnalyzerEngineVersion] as [engine_version],
[EPOEvents].[AnalyzerDATVersion] as [dat_version],
[EPOProdPropsView_VIRUSCAN].[datver] as [vse_dat_version],
[EPOProdPropsView_VIRUSCAN].[enginever64] as [vse_engine64_version],
[EPOProdPropsView_VIRUSCAN].[enginever] as [vse_engine_version],
[EPOProdPropsView_VIRUSCAN].[hotfix] as [vse_hotfix],
[EPOProdPropsView_VIRUSCAN].[productversion] as [vse_product_version],
[EPOProdPropsView_VIRUSCAN].[servicepack] as [vse_sp]
FROM [EPOEvents]
LEFT JOIN [EPOLeafNodeMT] ON [EPOEvents].[AgentGUID] = [EPOLeafNodeMT].[AgentGUID]
LEFT JOIN [EPOProdPropsView_VIRUSCAN] ON [EPOLeafNodeMT].[AutoID] = [EPOProdPropsView_VIRUSCAN].[LeafNodeID]
LEFT JOIN [EPOComputerPropertiesMT] ON [EPOLeafNodeMT].[AutoID] = [EPOComputerPropertiesMT].[ParentID]
LEFT JOIN [EPOEventFilterDesc] ON [EPOEvents].[ThreatEventID] = [EPOEventFilterDesc].[EventId]
WHERE [EPOEvents].[AutoID] > ? AND ([EPOEventFilterDesc].[Language]='0409')
ORDER BY [EPOEvents].[AutoID] ASC
Good afternoon! I have a XPRT_002_SYSAT-41777_202110020712.csv file. After some time, exactly the same XPRT_002_SYSAT-41777_202110020712.csv file appears in my directory, with exactly the same content but with a different modification time. In this case the system indexes all events from this file twice and I get duplicates. I know they can be filtered with dedup _raw, but that is not an option for me because it severely worsens search performance. Are there any other ways to base indexing on file content changes rather than on name and size, so that if the content matches the file is not indexed again? Tried: crcSalt = <SOURCE>, CHECK_METHOD = modtime
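For reference, the settings involved sit in two files; a sketch with placeholder paths, not a guaranteed fix for this case. With the defaults (no crcSalt, CHECK_METHOD = endpoint_md5) file tracking is based on a CRC of the file's initial content, so a file whose content is unchanged should normally not be re-read when only its modification time changes; CHECK_METHOD = modtime does the opposite of what is wanted here.

# inputs.conf (placeholder path)
[monitor:///data/exports/XPRT_*.csv]
index = main
sourcetype = xprt_csv
# no crcSalt: tracking stays content-based
# initCrcLength = 1024     # optional: hash more of the file if many files share identical first 256 bytes

# props.conf
[source::/data/exports/XPRT_*.csv]
CHECK_METHOD = endpoint_md5  # the default content-based check, instead of modtime
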
Hi, I would like to ask for help with the following problem: we have a SH cluster (3 nodes) and an IDX cluster (3 nodes). We upgraded from 8.0.9 to 8.1.6 because of the EOS of the 8.0 version. Everything looks fine except one thing - sometimes this happens: I run a search, the search starts, but after a while it gets stuck (the event count on the line below the SPL entry box stops increasing) and after about 5 minutes the search ends with the error message "Streamed search execute failed because: Error in 'lookup' command: Failed to re-open lookup file: '/srv/app/int/secmon/splunk/var/run/searchpeers/08270BDA-BE03-4A78-8C6C-95A9CE10BB8D-1633508003/kvstore_s_SA-IdeRjww0FotymhlCIaS1cqkc05a_assetsXy0Y9f6F5lMW4rOy8KLC@P22'". It happens completely at random, no matter what data I search for. Sometimes this message is generated by only 1 IDX node, sometimes by 2, sometimes by all 3 nodes in the IDX cluster. The error message is always exactly the same (except the part "1633508003", which is the time of the search). Sometimes I get partial results (some events returned), sometimes not (0 events returned). Before the upgrade there was no message like this. Could someone help with this? Is it related to the upgrade, and how can I fix it? I tried searching the Splunk Community and googling around, but did not find anything useful. Thanks in advance. Lukas Mecir
Hi, how can I calculate the percentage of each ErrorCode field by servername? Here is the SPL:

index="my_index" | rex field=source "\/log\.(?<servername>\w+)." | rex "Err\-ErrorCode\[(?<ErrorCode>\d+)"

Expected output:

Servername   ErrorCode   Percentage
server1      404         50%
             500         40%
             200         10%
server2      500         50%
             404         45%
             200         5%
…

Any idea? Thanks
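One common pattern (a sketch building on the rex extractions from the post) is to count per server and error code, compute the per-server total with eventstats, and derive the percentage from the two:

index="my_index"
| rex field=source "\/log\.(?<servername>\w+)."
| rex "Err\-ErrorCode\[(?<ErrorCode>\d+)"
| stats count AS errors BY servername ErrorCode
| eventstats sum(errors) AS total BY servername
| eval Percentage = round(100 * errors / total, 1) . "%"
| sort servername - errors
| table servername ErrorCode Percentage
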
Hi Splunkers, we are upgrading our Splunk version from 7.3.4 to 8.1.x. Can someone advise which 8.1.x release is the most stable? Please assist us. Thanks, Abhijeet B.
City contains 12 names, but in the result I am only able to see a city name when it has a product; if the product count is zero, the city name is not shown.

base search | stats count(product) AS Total BY City | fillnull value=0 City

Current result:
City      Total
citry1    1
citry5    50
citry10   15

Expectation:
City      Total
citry1    1
citry2    0
citry3    0
citry4    0
citry5    50
citry6    0
citry7    0
citry8    0
citry9    0
citry10   15
citry11   0
citry12   0
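fillnull can only fill fields on rows that already exist; stats never produces a row for a city with zero matching events. A common workaround (a sketch assuming a lookup file all_cities.csv that lists the 12 names in a City column - the lookup name is a placeholder) is to append the full city list and re-aggregate:

base search
| stats count(product) AS Total BY City
| append [| inputlookup all_cities.csv | fields City]
| stats sum(Total) AS Total BY City
| fillnull value=0 Total
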
Hi, I have uploaded a CSV file and would like to know if it is possible to only display the content of the file?

Feature   Business   Environment   Bench Secs   Grade
Risk      Set offs   Production    300          10
Ops       Count      UAT           500          11
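If the CSV was uploaded as a lookup file, its contents can usually be displayed as-is with inputlookup (a sketch; the file name is a placeholder for whatever the upload was saved as):

| inputlookup my_uploaded_file.csv

If the file was indexed as event data instead, something like index=<your_index> source="*my_file.csv" | table * shows the same rows.
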
Hi All, I know the topic is documented quite extensively in several posts within the Splunk community, but I could not really figure out what is best to apply in the case below.

Some context about the architecture in use: we have basically 3 layers (so 3 indexes) which a FE call goes through:
apigee
mise (microservices)
eCommerce application

In other words, a call initiated by the FE is routed to apigee, from where it goes to some microservice, which in turn might call the eCommerce application. The calls are uniquely identified by a requestId, so given an apigee call I can find exactly which eCommerce calls are related thanks to the requestId.

Splunk dashboard: I'm building a dashboard where:

panel 1: I list all apigee calls grouped by common url to get some stats out of it (so far so good). Something like:
> oce/soe/orders/v1/*/billingProfile (normalized url), then display the columns: count, avg(duration), perc90(duration)
> oce/soe/orders/v1/*/delivery-method (normalized url), then display the columns: count, avg(duration), perc90(duration)
> ...

panel 2: given the focus on one apigee "normalized" call from panel 1, I list all related eCommerce calls. The goal is to get some stats over those calls, grouping them by common urls, averaging duration and taking perc90. To make this work I use a join, but it is very slow. Something like, given the focus on the billingProfile apigee call above:
> ecommerceapp/billinginfo/store/*/ (normalized url), then display the columns: count, avg(duration), perc90(duration)

I want to ask if you see any other way to reach the same goal without a join, or if you have any general hints to improve the performance. Thanks in advance, Vincenzo

I report below the search I'm currently using, highlighting the part of interest:

index=ecommerce_prod (namespace::prod-onl) (earliest="$timeRange1.earliest$" latest="$timeRange1.latest$") "Status=20*"
| where isnotnull( SCS_Request_ID)
| rex field=_raw "(?<operation>(?<![\w\d])(GET|POST|PUT|DELETE)(?![\w\d]))"
| rex field=_raw "(?<=[/^(POST|GET|PUT|DELETE)$/] )(?<service>[\/a-zA-Z\.].+?(?=HTTP))"
| rex field=_raw "(?<=Duration=)(?<duration>[0-9]*)"
| eval temp=split(service,"/")
| eval field1=mvindex(temp,1)
| eval field2=mvindex(temp,2)
| eval field3=mvindex(temp,3)
| eval field4=mvindex(temp,4)
| eval field4=if(like(field4,"%=%") OR like(field4,"%?%"), "*", field4)
| eval field5=mvindex(temp,5)
| eval field5=if(like(field5, "%=%") OR like(field5,"%?%"), "*", field5)
| eval url_short=if(isnull(field5),field1."/".field2."/".field3."/".field4."/", field1."/".field2."/".field3."/".field4."/".field5)
| eval fullName = operation." ".url_short
| table SCS_Request_ID, operation, url_short, duration
| join SCS_Request_ID
    [ search index="apigee" (earliest="$timeRange1.earliest$" latest="$timeRange1.latest$") status="20*"
    | rename tgrq_h_scs-request-id as SCS_Request_ID
    | table SCS_Request_ID
    | where isnotnull( SCS_Request_ID) ]
| stats count, avg(duration) as avg_, perc90(duration) as perc90_ by operation, url_short
| eval avg_=round(avg_,2)
| eval perc90_=round(perc90_,2)
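One join-free pattern that is often suggested for this kind of correlation is to search both indexes in a single query, mark which request IDs were also seen in apigee, and correlate with stats on the request ID. A rough sketch only - it keeps the existing index and field names, assumes the url_short/duration extractions from the post are applied unchanged (they simply will not populate on the apigee events), and it collapses multiple eCommerce calls per request ID, so numbers can differ slightly from the join version:

((index=ecommerce_prod (namespace::prod-onl) "Status=20*") OR (index=apigee status="20*")) earliest="$timeRange1.earliest$" latest="$timeRange1.latest$"
| rename tgrq_h_scs-request-id AS SCS_Request_ID
| where isnotnull(SCS_Request_ID)
``` the rex / eval / url_short normalization from the original search goes here ```
| eval from_apigee=if(index="apigee", 1, 0)
| stats max(from_apigee) AS seen_in_apigee, values(operation) AS operation, values(url_short) AS url_short, max(duration) AS duration BY SCS_Request_ID
| where seen_in_apigee=1
| stats count, avg(duration) AS avg_, perc90(duration) AS perc90_ BY operation, url_short
| eval avg_=round(avg_,2), perc90_=round(perc90_,2)

Because everything is resolved in one pass with stats, the subsearch limits and the cost of join disappear; the trade-off is that the extractions now run over events from both indexes.
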
Dear Splunk community, I am using rex to extract data from _raw and put it into new fields like so:

[10/5/21 23:02:25:134 CEST] 00000063 SystemOut O 05 Oct 2021 23:02:25:133 [INFO] [CRONSERVER] [CID-MXSCRIPT-1673979] SCRIPTNAME - 00 - Function:httpDiscovery(POST, https, host, /call, BASE64ENC(USER:PASSWORD)) Profile = MYPROFILE - Scope = MYHOSTNAME - End - Result(strResponseStatus, stResponseReason, strResponseData)=([200], [OK], [{"message":"SUCCESS"}{"runId":"2021100523022485"} ])

| rex field=_raw "Scope = (?<fqdn>\S*)"
| rex field=_raw "Profile = (?<profile>\S*)"

This creates the new fields but also shows _raw. I don't want _raw to show, but if I use this:

| table _time

instead of this:

| table _time, _raw,

the fields that I create will no longer show, so I have to include _raw as well. I can use mode=sed with rex to delete data from _raw, for example keeping only the profile and then renaming _raw to profile, but I don't have any experience with sed and I would prefer an easier way. My question: is it possible to hide _raw and still use rex on _raw to create new fields?

Thanks.
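rex extracts from _raw whether or not _raw is displayed afterwards; the table command only controls which columns are shown. So listing the extracted fields by name is normally enough (a sketch reusing the extractions from the post; <base search> is a placeholder):

<base search>
| rex field=_raw "Scope = (?<fqdn>\S*)"
| rex field=_raw "Profile = (?<profile>\S*)"
| table _time fqdn profile
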
Hi All, I am trying to merge the rows of a column into one row for the table below:

App_Name                   Country       Last_Deployed             Temp_Version
com.citiao.cimainproject   China         2021-09-24 13:30:04.39    1.0.12.20210907193849359
com.citiao.cimainproject   HongKong      2021-09-24 11:48:15.176   1.0.12.20210907193849359
com.citiao.cimainproject   Indonesia     2021-09-10 13:17:38.254   1.0.12.20210907193849359
com.citiao.cimainproject   Malaysia      2021-09-10 14:54:54.098   1.0.12.20210907193849359
com.citiao.cimainproject   Philippines   2021-09-24 11:58:44.034   1.0.12.20210907193849359
com.citiao.cimainproject   Singapore     2021-09-10 12:53:25.539   1.0.12.20210907193849359
com.citiao.cimainproject   Thailand      2021-09-24 14:01:09.682   1.0.12.20210907193849359
com.citiao.cimainproject   Vietnam       2021-09-10 15:00:06.598   1.0.12.20210907193849359

I used this query:

my query | stats values(App_Temp_Name) as App_Name latest(LAST_DEPLOYED) as Last_Deployed latest(APP_TEMP_VER) as Temp_Version by Country | table App_Name,Country,Last_Deployed,Temp_Version

But I need to show App_Name only once, keeping the other columns as they are, like:

App_Name                   Country       Last_Deployed             Temp_Version
com.citiao.cimainproject   China         2021-09-24 13:30:04.39    1.0.12.20210907193849359
                           HongKong      2021-09-24 11:48:15.176   1.0.12.20210907193849359
                           Indonesia     2021-09-10 13:17:38.254   1.0.12.20210907193849359
                           Malaysia      2021-09-10 14:54:54.098   1.0.12.20210907193849359
                           Philippines   2021-09-24 11:58:44.034   1.0.12.20210907193849359
                           Singapore     2021-09-10 12:53:25.539   1.0.12.20210907193849359
                           Thailand      2021-09-24 14:01:09.682   1.0.12.20210907193849359
                           Vietnam       2021-09-10 15:00:06.598   1.0.12.20210907193849359

Please help me modify the query to get the desired output. Thank you very much..!!
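One display-oriented trick (a sketch building on the query from the post) is to blank out App_Name on every row except the first occurrence of each value, using streamstats:

my query
| stats values(App_Temp_Name) as App_Name latest(LAST_DEPLOYED) as Last_Deployed latest(APP_TEMP_VER) as Temp_Version by Country
| sort App_Name Country
| streamstats count AS row_in_group BY App_Name
| eval App_Name=if(row_in_group=1, App_Name, "")
| fields - row_in_group
| table App_Name, Country, Last_Deployed, Temp_Version
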
Hi, I am trying to get the day-wise error count by data.message, but only if yesterday's error count for that message is more than 50.

index="eshop" NOT(index=k8*dev OR index=k8*test) tag=error
| eval time=strftime(_time,"%Y-%m-%d")
| table time,data.message
<condition: if the previous day's count for the data.message is less than 50, it should be excluded from the stats>
| stats count by time,data.message
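One way to express that condition (a sketch on top of the search from the post; the relative_time arithmetic assumes "yesterday" means the previous calendar day at search time) is to compute each message's count for yesterday with eventstats and filter on it:

index="eshop" NOT(index=k8*dev OR index=k8*test) tag=error
| eval time=strftime(_time,"%Y-%m-%d")
| stats count by time, data.message
| eventstats max(eval(if(time=strftime(relative_time(now(),"-1d@d"),"%Y-%m-%d"), count, null()))) AS yesterday_count BY data.message
| where yesterday_count > 50
| fields - yesterday_count
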
I am looking for O365 use cases related to MS Teams, SharePoint, Exchange, and OneDrive. Currently the data is populated in Azure and needs to be ingested into Splunk. What kind of use cases can I create based on these data sources (MS Teams, SharePoint, Exchange, OneDrive)? I am also looking for malicious-activity and threat-level O365 use cases. Please suggest.
Hi, I'm looking to use a heavy forwarder to append a string to specific log messages. I'm following the guide here https://docs.splunk.com/Documentation/Splunk/8.2.2/Data/Anonymizedata (specifically the "Anonymize data with a regular expression transform" part), which only seems to mask data. I don't want to alter the log entry as such, but rather add something like "<Review Required>" to the end of any log that matches a specific regex. Can this be done using the heavy forwarder and transforms.conf?
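The same index-time transform mechanism from that guide can rewrite _raw into a longer string, not just a masked one. A sketch applied on the heavy forwarder, where the sourcetype name and the trigger regex are placeholders:

# props.conf
[your_sourcetype]
TRANSFORMS-flag_review = add_review_flag

# transforms.conf
[add_review_flag]
# only events matching this pattern are rewritten; everything else passes through untouched
REGEX = ^(.*failed login for admin.*)$
FORMAT = $1 <Review Required>
DEST_KEY = _raw

Events that do not match the REGEX keep their original _raw, so only the targeted messages get the suffix.
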
Hello, I didn't find a solution here, so I am sharing what I managed to get working. First of all, if you want to split a search across many panels, you can do that with:

index="_internal" | timechart count by sourcetype

and activate trellis split by sourcetype. But if, within each graph, you also want a split by status (for example), please try this query:

index="_internal" | bin _time | stats count by _time, sourcetype, status | eval {status}=count | fields - status, count | fillnull value=0 | stats sum(*) as * by _time, sourcetype