All Topics

Top

All Topics

Anybody here running Splunk Enterprise on an IBM E950 or similar IBM POWER8 or POWER9 CPU based servers with a Linux kernel?
I have a lookup with multiple columns (keys).  Some combinations make a unique match, but I need an ambiguous search on a single key to return all matched items of a particular field.  In a simplifie... See more...
I have a lookup with multiple columns (keys).  Some combinations make a unique match, but I need an ambiguous search on a single key to return all matched items of a particular field.  In a simplified form, the lookup is like this QID IP Detected 12345 127.0.0.1 2022-12-10 45678 127.0.0.1 2023-01-21 12345 127.0.0.2 2023-01-01 45678 127.0.0.2 2022-12-15 23456 ... ... QID and IP determines a unique Detected value; you can say the combination is a primary key.  No problem with search by primary key.  My requirement is to search by QID alone.  For 12345, for example, I expect the return to be multivalued (2022-12-10, 2023-01-01). If I hard code QID in an emulation, that's exactly what I get.     | makeresults | eval QID=12345 | lookup mylookup QID | table QID Detected     This will give me QID Detected 12345 2022-12-10 2023-01-01 But if use the same lookup in a search, e.g.,   index=myindex QID=12345 | stats count by QID ``` result is the same whether or not stats precedes lookup ``` | lookup mylookup QID | table QID Detected   the result is blank QID Detected 12345   The behavior can be more complex if the search returns more than one QID (e.g., QID IN (12345, 45678)).  Sometimes one of them will get Detected populated, but not others. How can I make sure multiple matches are all returned?
Hi All, I need to re-import new XML metaddata to the Splunk Cloud SAML Configuration which is generated for Azure SSO users. The current cert is valid until 19/02/2023. The issue is when I try to im... See more...
Hi All, I need to re-import new XML metaddata to the Splunk Cloud SAML Configuration which is generated for Azure SSO users. The current cert is valid until 19/02/2023. The issue is when I try to import the new xml (federationmetadata.xml) into the SAML configuration in the Splunk It constantly encounters the error “There are multiple cert,idepCertPath,idpCert.pem, must be directory" Try to remove the idpCert.pem in the ./etc/auth/idpCerts/idpCert.pem, and shows Server Error. I don't know how I can find the path ( ./etc/auth/idpCerts/idpCert.pem) in the Splunk cloud as it is not on=premises. I really need your help as the current valid will expired very soon (19/02/2023)and results in users and admins being locked out of Splunk Cloud. Any way to fix it. """urgent to solve""" Many thanks, Goli @tlam_splunk @gcusello  I would greatly appreciate it if anyone could help me!  
Unfortunately I have no control over the log data formatting... it is in format:  Field1=Value1|Field2=Value2| ... |Criteria=one,two,three,99.0|... I have one field, Criteria, that has many value... See more...
Unfortunately I have no control over the log data formatting... it is in format:  Field1=Value1|Field2=Value2| ... |Criteria=one,two,three,99.0|... I have one field, Criteria, that has many values with embedded commas. Splunk search only give me the first value... I want all values treated as one in a stats count by I tried below to rewrite them, and do see the changes, but stats still getting only first value. index=myidx  Msg=mymsg  |  rex mode=sed field=_raw "s/,/-/g" | bucket span=1d _time as ts | eval ts=strftime(ts,"%Y-%m-%d") | stats count by ts Criteria  
Hi, I am using a regex to search for a field "statusCode" which could have multiple values, i.e. "200", "400", "500", etc....  I am attempting to create an Interesting Field "statusCode" and have i... See more...
Hi, I am using a regex to search for a field "statusCode" which could have multiple values, i.e. "200", "400", "500", etc....  I am attempting to create an Interesting Field "statusCode" and have it sorted by different statusCode values. I am  trying to do perform a search using the following:     \\Sample Query index=myCoolIndex cluster_name="myCoolCluster" sourcetype=myCoolSourceType label_app=myCoolAppName ("\"statusCode\"") | rex field=_raw \"statusCode\"\s:\s\"?(?<statusCode>2\d{2}|4\d{2}|5\d{2})\"? \\Sample Log (Looks like JSON object, but its a string): "{ "correlationId" : "", "message" : "", "tracePoint" : "", "priority" : "", "category" : "", "elapsed" : 0, "locationInfo" : { "lineInFile" : "", "component" : "", "fileName" : "", "rootContainer" : "" }, "timestamp" : "", "content" : { "message" : "", "originalError" : { "statusCode" : "200", "errorPayload" : { "error" : "" } }, "standardizedError" : { "statusCode" : "400", "errorPayload" : { "errors" : [ { "error" : { "traceId" : "", "errorCode" : "", "errorDescription" : "", "errorDetails" : "" } } ] } }, "standardizedError" : { "statusCode" : "500", "errorPayload" : { "errors" : [ { "error" : { "traceId" : "", "errorCode" : "", "errorDescription" : "" "errorDetails" : "" } } ] } } }, }"     Using online regex tools and a sample output of a log I have confirmed the regEx works outside of a Splunk query.  I have also gone through numerous Splunk community threads where I have tried different permutations based on suggestions with no luck.  Any help would be appreciated.  
Hola comunidad  I'm trying to configure the props file so that the following event starts from the third line: Currently, I am testing as follows:   If I leave this setting, the tim... See more...
Hola comunidad  I'm trying to configure the props file so that the following event starts from the third line: Currently, I am testing as follows:   If I leave this setting, the timestamp of the first few lines will be taken from splunk, but it should take the timestamp of the lines with date. Regards  
Is there a way in Splunk to determine how a user arrived at a destination IP? Did they click a link from a certain webpage, or did they go there directly? Another way to look at it is if there is a... See more...
Is there a way in Splunk to determine how a user arrived at a destination IP? Did they click a link from a certain webpage, or did they go there directly? Another way to look at it is if there is a way to separate user activity from webpage activity. Websites automatically load advertisements and other content automatically within a second, or a very small time interval. Users on the other hand are scrolling, clicking on a link, then clicking on another link which takes a significantly longer amount of time. Being able to consolidate web page activity where dozens of destination addresses are accessed within 5 seconds into a single event where just the first record is shown would help to reduce the number of results returned when you're looking at a time window containing several thousand records.
It's making me crazy!!! Splunk Enterprise 8.2.6, Cluster SH with 3 members.     [role_test] cumulativeRTSrchJobsQuota = 0 cumulativeSrchJobsQuota = 0 grantableRoles = test importR... See more...
It's making me crazy!!! Splunk Enterprise 8.2.6, Cluster SH with 3 members.     [role_test] cumulativeRTSrchJobsQuota = 0 cumulativeSrchJobsQuota = 0 grantableRoles = test importRoles = user srchIndexesAllowed = *;_* srchMaxTime = 8640000     A "test" new Role. Import capabilities from "user" Role. A new user is assigner to the "test" Role.       No way to query _internal indexes!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! Any suggestion??? Thanks.
Setup an app folder on my search head (clustered with indexers and HECS)  "TA-Whatever"  from the app builder. Dropped a  py script in the default folder inside TA-whatever that gens a json file that... See more...
Setup an app folder on my search head (clustered with indexers and HECS)  "TA-Whatever"  from the app builder. Dropped a  py script in the default folder inside TA-whatever that gens a json file that gets dropped in the parent app TA-whatever folder.  When going to the add data, file and directory menu, I wen thru the server file system drop downs and selected the  json file and on the 2nd page where you select the sourcetype it sees the  json data.  After I select my index, source type and all that, finish, restart Splunk, and search the index,  I see nothing, no data is there.   I have rebuilt the app several times as the Splunk user,  as root.  chmod everything to 777. rebuild source types and index's. did props conf, treid with no entry in props.  added inputs conf to the local app folder, tried with no input.conf in the ta-whatever local folder,  also added a monitor to the global inputs.conf in etc/sys/local and still no dice.   here is my input.conf that i tried in global and the local app directory: [monitor://tmp/felt.json] disabled = 0 index = googlepyscript sourcetype = googlepy   also tried:   [monitor:///tmp/felt.json] disabled = 0 index = googlepyscript sourcetype = googlepy     No matter what I do the index is not getting data. I have tried it with the build in _json sourcetype and created my own and no data goes to the index after I finish the wizard.   Any input is welcomed at this point as I have been going at it for several days. Thanks! 
I'm trying to create a search that shows a daily message count (both inbound and outobound) and the average for each direction. Although it doesn't give me any errors, when the table gets created, th... See more...
I'm trying to create a search that shows a daily message count (both inbound and outobound) and the average for each direction. Although it doesn't give me any errors, when the table gets created, the results show as zero (I know this is inaccurate as I pulled a message trace from o365 to confirm).   index=vs_email sampledesign@victoriasecret.com | eval direction=case(RecipientAddress="sampledesign@victoriasecret.com", "inbound", RecipientAddress!="sampledesign@victoriasecret.com", "outbound") | dedup MessageId | bin _time span=1d | eventstats count(direction="inbound") as inbound_count | eventstats count(direction="outbound") as outbound_count | dedup _time | eventstats avg(inbound_count) as average_inbound_count | eventstats avg(outbound_count) as average_outbound_count | table inbound_count outbound_count average_inbound_count average_outbound_count   All of the results are showing as zero. Any help would be much appreciated. Thanks!
Hi, Please, Can some one let me know what is the file and variable in "Splunk Add-on AWS" for S3, that limits the ingestion of files to 1 hour? I didn't find in inputs.conf file any variable that l... See more...
Hi, Please, Can some one let me know what is the file and variable in "Splunk Add-on AWS" for S3, that limits the ingestion of files to 1 hour? I didn't find in inputs.conf file any variable that limits the ingestion of files to 1 hour. We need to index older files from S3 bucket but "Splunk Add-on AWS" only let index the last hour. This is the inputs.conf file [aws_s3://cloud-logs] aws_account = abc aws_s3_region = us-east-1 bucket_name = f-logs character_set = auto ct_blacklist = ^$ host_name = s3.us-east-1.amazonaws.com index = cloud initial_scan_datetime = 2022-01-14T15:59:18Z max_items = 100000 max_retries = 3 polling_interval = 300 private_endpoint_enabled = 0 recursion_depth = -1 sourcetype = cloud:json disabled = 0 Regards Edgard Patino      
Am using scheduled alerts , I notice not all alerts are getting fired and am not receiving emails for all the events. Around 45-50% for alerts I get, rest of them I got get email.   Any reason wh... See more...
Am using scheduled alerts , I notice not all alerts are getting fired and am not receiving emails for all the events. Around 45-50% for alerts I get, rest of them I got get email.   Any reason why this weird behaviour ?
Good Afternoon I wanted to reach out to the community for some assistance/clarification  on the best approach to change a search head with 2 index peers to a new windows server? We currently have... See more...
Good Afternoon I wanted to reach out to the community for some assistance/clarification  on the best approach to change a search head with 2 index peers to a new windows server? We currently have one search head acting as the license master with two index peers that it is connected to.  The search head server is being decommissioned so I will need to swap out the search head and license master to the new server.  Is there a recommended best approach to this?    I found the following information: How to migrate When you migrate on *nix systems, you can extract the tar file you downloaded directly over the copied files on the new system, or use your package manager to upgrade using the downloaded package. On Windows systems, the installer updates the Splunk files automatically. Stop Splunk Enterprise services on the host from which you want to migrate. Copy the entire contents of the $SPLUNK_HOME directory from the old host to the new host. Copying this directory also copies the mongo subdirectory. Install Splunk Enterprise on the new host. Verify that the index configuration (indexes.conf) file's volume, sizing, and path settings are still valid on the new host. Start Splunk Enterprise on the new instance. Log into Splunk Enterprise with your existing credentials. After you log in, confirm that your data is intact by searching it. here: https://docs.splunk.com/Documentation/Splunk/7.2.3/Installation/MigrateaSplunkinstance?_ga=2.79344788.1455755630.1676483895-73379229.1676483895&_gl=1*xs5dk1*_ga*NzMzNzkyMjkuMTY3NjQ4Mzg5NQ..*_ga_5EPM2P39FV*MTY3NjQ4Mzg5NS4xLjEuMTY3NjQ4NDUzNi4yOC4wLjA. But this seems a little to simple.  I am unable to keep the same server name and IP address that the search head has now.  So that got me thinking this maybe simple but I just wanted to ensure I am not missing a critical step. Thank you Dan  
I have a saved report with three output fields that I want to add to a Column chart in Dashboard studio. Two of the three fields contain static values (license limit and optimal utilization threshold... See more...
I have a saved report with three output fields that I want to add to a Column chart in Dashboard studio. Two of the three fields contain static values (license limit and optimal utilization threshold) that I want to add as overlays to the third utilized SVC field. I can't seem to get the JSON correct. This is as close as I have come. How can I add two fields as overlays in a Column chart? Image attached.     { "type": "splunk.column", "dataSources": { "primary": "ds_search_1" }, "title": "SVC License Usage (today)", "options": { "yAxisAbbreviation": "off", "y2AxisAbbreviation": "off", "showRoundedY2AxisLabels": false, "legendTruncation": "ellipsisMiddle", "showY2MajorGridLines": true, "xAxisTitleVisibility": "hide", "yAxisTitleText": "SVC Usage", "overlayFields": ["optimal utilization threshold", "license limit"], "columnGrouping": "overlay" }, "context": {}, "showProgressBar": false, "showLastUpdated": false }      
I received an error stating "This saved search cannot perform summary indexing because it has a malformed search." while I was setting up summary indexing through the UI.  The SPL in my saved search... See more...
I received an error stating "This saved search cannot perform summary indexing because it has a malformed search." while I was setting up summary indexing through the UI.  The SPL in my saved search included a lookup and a subsearch to dynamically set the earliest and latest values for the main search. From what I found researching the error, the issue is related to passing the earliest and latest values back to the main search. It took me a while to solve this so I thought I'd post it here to help anyone else seeing this error.  
Hello, I have created a modular input using the example of splunk-app-example. It extends the class Script and I modified the get_scheme function adding arguments (example) key_secret = Argument(... See more...
Hello, I have created a modular input using the example of splunk-app-example. It extends the class Script and I modified the get_scheme function adding arguments (example) key_secret = Argument("key_secret") key_secret.title = "Key Secret" key_secret.data_type = Argument.data_type_string key_secret.required_on_create = True   This code allows to save key_secret as a plain string, which is clearly unsecure. Investigating I reached to the storage_password endpoints, and I added the following to stream_events method: if key_secret != "": secrets = self.service.storage_passwords storage_passwords = self.service.storage_passwords storage_password = storage_passwords.create(key_secret, key_id, tenant) input_item.update({"key_secret": ""}) else: key_secret = next(secret for secret in secrets if (secret.realm == tenant and secret.username == key_id)).clear_password This is not working, as I cannot modify the input definition, is storing both in storage_passwords and in inputs.conf. Is there any way in code to delete the inputs.conf password, or what is the correct way to manage this? Thanks!    
Hi Splunkers, I have developed the custom Add On one one server on which its working fine , when i have exported on another server, its giving the below error. i have gone through the many articles... See more...
Hi Splunkers, I have developed the custom Add On one one server on which its working fine , when i have exported on another server, its giving the below error. i have gone through the many articles on community and found its somehow relate to password.conf file. mean since its built on different server so it had encrypted password with their splunk.secret and kvstore. now please help me out how can I resolve issue on here another server and update password.conf or bind with this another server.    ERROR PersistentScript [23354 PersistentScriptIo] - From {/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/splunk-add-on-for-nimsoft/bin/splunk_add_on_for_nimsoft_rh_settings.py persistent}: WARNING:root:Run function: get_password failed: Traceback (most recent call last):   Thanks in advance,
Hi, I have below fields in which i need to display the count of each field value |eval TotalApps=if(match('Type'="NTB"),"1","0") |eval InProgress=if(Type= "NTB" AND isnull(date),"1","0") |eval... See more...
Hi, I have below fields in which i need to display the count of each field value |eval TotalApps=if(match('Type'="NTB"),"1","0") |eval InProgress=if(Type= "NTB" AND isnull(date),"1","0") |eval Submitted=if(Type= "NTB" AND isnotnull(date),"1","0") |eval Apps_Submitted=if(match('Myinfo_Used',"1"),'REASON_CD',"0") |stats count by Apps_Submitted getting results as COPS   1 CMS   2 FCO   3 but requirement is |stats sum(TotalApps) as TotalApps sum(InProgress) as InProgress sum(Submitted) as  Submitted (along with the AppsSubmitted count of each field value) Eg: TotalApps    10 InProgress   5 Submitted   5 AppsSubmitted  5 COPS       1 CMS         2 FCO          3
How can I created dashboard for my entities like image belong in IT Essentials work app. I download the manuel named: - Entity Integrations Manual - Overview of Splunk IT Essentials Work But ... See more...
How can I created dashboard for my entities like image belong in IT Essentials work app. I download the manuel named: - Entity Integrations Manual - Overview of Splunk IT Essentials Work But i can not find my answer Thank you for your help
Hi, I have a problem finding answers about the failure of a universal forwarder to re-ingest an XML file. 02-08-2023 11:11:40.348 +0000 INFO WatchedFile [10392 tailreader0] - Checksum for seekptr... See more...
Hi, I have a problem finding answers about the failure of a universal forwarder to re-ingest an XML file. 02-08-2023 11:11:40.348 +0000 INFO WatchedFile [10392 tailreader0] - Checksum for seekptr didn't match, will re-read entire file='ps_Z00000ldpowf9tXp9iZcoMZgvijew.log'. This is an XML file. It is created as a small file. Eventually, an application will re-write this file with a temporary name before renaming it to this same name. This can be seconds after it is created or after many minutes or even hours. My problem is that this event suggests that the forwarder knows that the file has changed but the new content of the file is not ingested. It will be ingested as expected if I manually modify the top of the file later. At that point, I see: 02-08-2023 16:21:51.439 +0000 INFO WatchedFile [10392 tailreader0] - Checksum for seekptr didn't match, will re-read entire file='ps_Z00000ldpowf9tXp9iZcoMZgvijew.log'. 02-08-2023 16:21:51.439 +0000 INFO WatchedFile [10392 tailreader0] - Will begin reading at offset=0 for file='ps_Z00000ldpowf9tXp9iZcoMZgvijew.log'. And the new version of the file is finally available. This is a universal forwarder. This is a Linux server. The new version of the XML file is 2233 bytes long. The length on the file does not seem to be a problem. A transform exists on the indexers to load the content as one event. This works fine. I do not believe my problem is related to initCrcLength as it did notice the file has changed. I blacklist the name of the temporary file. Switching "multiline_event_extra_waittime” true or false does not help. The ingestion and re-ingestion works fine most of the times. Maybe one every 20 files do not get re-ingested as expected. And it is usually the ones that are re-written few seconds after it got created. My question is the following: why is the file sometimes not re-indexed if the forwarder says it will do it? I can see that there can be a timing/race condition at play but the logs do not show anything other than the INFO records. Would changing the debugging level help? What other parameter in the input could help if this is a timing problem? I failed finding a solution online because pretty much all conversations related to this INFO message are about stopping the file re-ingestion. So I have not been successful in finding my needle. Any advice is welcomed. Thanks.