All Posts


| rex "(?<username>[^:]+):(?<passwd>[^:]+):(?<path>[^:]+)"
How can I highlight empty fields in a dashboard with colours? Simple steps, please.
John:x:/home/John:/bin/bash    Is there a way to extract the fields from the above, colon-separated? We have many users in this format from /etc/passwd: John = username, x = passwd, /home/John = path.
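The colon-separated extraction can be sanity-checked outside Splunk with an equivalent Python regex using the same capture groups as the SPL rex answer (this is only an illustration of the pattern, not how Splunk runs it internally):

```python
import re

# Same capture groups as the SPL rex: username, passwd, path
pattern = re.compile(r"(?P<username>[^:]+):(?P<passwd>[^:]+):(?P<path>[^:]+)")

line = "John:x:/home/John:/bin/bash"
m = pattern.match(line)
fields = m.groupdict()
print(fields)  # {'username': 'John', 'passwd': 'x', 'path': '/home/John'}
```

Each `[^:]+` consumes up to the next colon, so the trailing `/bin/bash` shell field is simply left unmatched.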
Hi @Splunkerninja , the only way to have different retention policies is to store logs in different indexes. Usually you choose indexes based on two parameters: retention policy and access grants. You cannot have different retention policies for data in the same index, so you have to create a rule on your indexers or (if present) on your Heavy Forwarders to override the default index value. Ciao. Giuseppe
@gcusello @richgalloway @ITWhisperer  ++ ++   Any suggestions please?
Use the strptime function to convert text timestamps into epoch format.  Then use strftime to convert epoch timestamps into human-readable form. | eval epochTime=strptime(plugin_date, "%Y%m%d%H%M") | eval humanTime=strftime(epochTime, "%Y-%m-%d %H:%M")  
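The same conversion can be sketched in Python for anyone verifying the format strings; `202311071200` is the compact `plugin_date` value discussed in this thread:

```python
from datetime import datetime

# Parse the compact plugin_date with the same format string
# the SPL strptime uses, then re-render it human-readably.
epoch_time = datetime.strptime("202311071200", "%Y%m%d%H%M")
human_time = epoch_time.strftime("%Y-%m-%d %H:%M")
print(human_time)  # 2023-11-07 12:00
```

In SPL, `strptime` returns an epoch number rather than a datetime object, but the format codes (`%Y%m%d%H%M`) behave the same way.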
We are indexing email metadata logs from various regions (China, US, Mexico, Italy) in Splunk Cloud. The retention policy for these metadata logs is 270 days. We want to change the retention policy for a few of the regions. For example, we want to store China metadata logs for only 30 days and all other logs for 270 days. How can we achieve this? Appreciate any input here.
In my Splunk search for getting the date of the Nessus plugins feed version used in a scan, I get the number returned in the original year-month-day-time format (for example, November 7th 2023 at 1200 displays as 202311071200). I would like to convert this into a readable format that I can then manipulate in Splunk, such as getting the epoch time. How would I go about doing this?
No, the number after the slash (/) denotes how many significant bits are compared when the addresses are converted to binary. There are 32 bits in the binary representation of an IPv4 address, so, for example, /24 means compare just the first 24 bits of the addresses. This is equivalent to ignoring the last byte, masking with 0xFFFFFF00, or shifting right by 8 bits (32 - 24).
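The bit arithmetic above can be checked in Python; the helper below is just an illustration of prefix-length comparison, not anything Splunk runs:

```python
import ipaddress

def same_subnet(a: str, b: str, prefix: int) -> bool:
    """Compare only the first `prefix` bits of two IPv4 addresses."""
    mask = (0xFFFFFFFF << (32 - prefix)) & 0xFFFFFFFF
    return (int(ipaddress.IPv4Address(a)) & mask) == (int(ipaddress.IPv4Address(b)) & mask)

# /24 ignores the last byte (mask 0xFFFFFF00):
print(same_subnet("1.2.3.4", "1.2.3.99", 24))  # True
print(same_subnet("1.2.3.4", "1.2.4.4", 24))   # False
```

For /24 the computed mask is exactly 0xFFFFFF00, matching the explanation above.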
Apparently my Google-Fu isn't the best and I can't find an explanation. Can someone please enlighten me? I have a lookup table that looks like this:

CIDR, ip_address
24, 1.2.3.4/24
23, 5.6.7.8/23

I want events whose source IPs match the IP addresses in the lookup table and whose destination IPs do not. I ran the following query, and it appears to work (unless it actually doesn't?):

index="index1" | lookup lookup1 ip_address as src_ip OUTPUTNEW ip_address as address | where dest_ip!=address

My confusion stems from the fact that ip_address is in CIDR notation. The way my mind processes this query: a new field called address is created, and the value of dest_ip is compared against the value of address. However, the value of address is in CIDR notation, and dest_ip is not. Is address treated as a list, with the value of dest_ip checked against each item in the list?
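What a CIDR-based lookup match does conceptually can be mimicked with Python's ipaddress module: an address matches a row if it falls inside that row's network. The networks below are from the question; this sketches CIDR matching in general, not Splunk's lookup internals:

```python
import ipaddress

# Lookup table from the question: networks in CIDR notation.
# strict=False lets "1.2.3.4/24" stand for the network 1.2.3.0/24.
networks = [ipaddress.ip_network("1.2.3.4/24", strict=False),
            ipaddress.ip_network("5.6.7.8/23", strict=False)]

def in_table(ip: str) -> bool:
    """True if ip falls inside any CIDR range in the table."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in networks)

print(in_table("1.2.3.200"))  # True  (inside 1.2.3.0/24)
print(in_table("9.9.9.9"))    # False (matches no network)
```

Note that a plain string comparison like `dest_ip!=address` compares the literal text "1.2.3.4/24" against a bare IP, which is a different operation from range membership.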
As far as I can tell, your configuration checks out. The problem could be that the Salesforce input comes in over an API/script. Different types of inputs behave differently with these configurations, and some even skip certain parts of the pipeline. If it were a normal monitoring input and your configuration were on the indexer, everything should work. I have to gather more info myself about how the API inputs are transformed before I can help you further on this.
Hi, I have created a cluster map that shows counts based on the number of ASA blocked actions. The circle size is based on the number of hits: a bigger circle represents more counts than a smaller circle. So far so good. It looks OK, but it would be even better if I could change the color based on the number of counts/hits. Is it also possible to change the color based on the destination port number (80, 23, 22, etc.)? Thanks, Geir
I've tried props.conf and transforms.conf, both on the Heavy Forwarder where the add-on is installed and on the indexers, with no success; it seems they are ignored and all the event types are still being collected. This is the Splunk doc I followed: https://docs.splunk.com/Documentation/Splunk/latest/Forwarding/Routeandfilterdatad#Keep_specific_events_and_discard_the_rest

transforms.conf

[setnull_EVENT_TYPE]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

[setparsing_EVENT_TYPE]
REGEX = EVENT_TYPE\=\"LightningPageView\"
DEST_KEY = queue
FORMAT = indexQueue

props.conf

[sfdc:logfile]
TRANSFORMS-setEventType = setnull_EVENT_TYPE, setparsing_EVENT_TYPE

This is a sample log:

2023-11-06T22:58:22.108+0000 SFDCLogType="LightningPageView" SFDCLogId="aaaa" SFDCLogDate="2023-11-06T00:00:00.000+0000" EVENT_TYPE="LightningPageView" TIMESTAMP="20231106225822.108" REQUEST_ID="TID:aaaa" ORGANIZATION_ID="aaaa" USER_ID="aaaa" CLIENT_ID="" SESSION_KEY="aaaa" LOGIN_KEY="aaaa/aaaa" USER_TYPE="Standard" APP_NAME="aaaa:aaaa" DEVICE_PLATFORM="SFX:BROWSER:DESKTOP" SDK_APP_VERSION="" OS_NAME="WINDOWS" OS_VERSION="10" USER_AGENT=""Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/119.0.0.0 Safari/537.36 Edg/119.0.0.0"" BROWSER_NAME="EDGE" BROWSER_VERSION="119" SDK_VERSION="" DEVICE_MODEL="" DEVICE_ID="" SDK_APP_TYPE="" CLIENT_GEO="Italy/Rome" CONNECTION_TYPE="" UI_EVENT_ID="ltng:pageView" UI_EVENT_SOURCE="" UI_EVENT_TIMESTAMP="1699311501485" PAGE_START_TIME="1699310036553" DURATION="588.0" EFFECTIVE_PAGE_TIME_DEVIATION="false" EFFECTIVE_PAGE_TIME_DEVIATION_REASON="" EFFECTIVE_PAGE_TIME_DEVIATION_ERROR_TYPE="" EFFECTIVE_PAGE_TIME="588.0" DEVICE_SESSION_ID="aaaa" UI_EVENT_SEQUENCE_NUM="22" PAGE_ENTITY_ID="" PAGE_ENTITY_TYPE="Case" PAGE_CONTEXT="force:objectHomeDesktop" PAGE_URL="/lightning/o/Case/list?filterName=Recent" PAGE_APP_NAME="LightningService" PREVPAGE_ENTITY_ID="aaaa" PREVPAGE_ENTITY_TYPE="Case" PREVPAGE_CONTEXT="one:recordHomeFlexipage2Wrapper" PREVPAGE_URL="/lightning/r/Case/aaaa/view" PREVPAGE_APP_NAME="LightningService" TARGET_UI_ELEMENT="" PARENT_UI_ELEMENT="" GRANDPARENT_UI_ELEMENT="" TIMESTAMP_DERIVED="2023-11-06T22:58:22.108Z" USER_ID_DERIVED="aaaa" CLIENT_IP="xxx.xxx.xxx.xxx" UserAccountId="aaaa" SplunkRetrievedServer="https://aaaa.my.salesforce.com"
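One way to sanity-check the keep-filter regex outside Splunk is to run it against a fragment of the sample event in Python; this only verifies that the REGEX line matches the text, not the queue-routing behavior of the pipeline:

```python
import re

# The keep-rule regex from transforms.conf (backslash-escaped quotes)
keep = re.compile(r'EVENT_TYPE\="LightningPageView"')

sample = 'EVENT_TYPE="LightningPageView" TIMESTAMP="20231106225822.108"'
other  = 'EVENT_TYPE="Login" TIMESTAMP="20231106225822.108"'

print(bool(keep.search(sample)))  # True  -> would be routed to indexQueue
print(bool(keep.search(other)))   # False -> would stay in nullQueue
```

If the regex matches here but events are still dropped or kept wrongly, the issue is more likely where the transforms run (scripted/modular inputs can bypass parsing on the Heavy Forwarder) than the pattern itself.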
Hi @TISKAR , I have never seen this issue. Open a case with Splunk Support. Ciao. Giuseppe
Hi @beneteos , good for you, see next time! Ciao and happy splunking Giuseppe P.S.: Karma Points are appreciated
You are correct about the size being affected by backup and save files. The limits.conf setting is for the base file only, so the total could be 4x that value. I don't see how the nmon TA has any bearing on this.
Yes @gcusello , I tried private browsing and clearing the browser cache, but I have the same problem.
Thank you very much for the support @yuanliu . Your query works perfectly. However, it gave me more results than expected. Basically, I want to see only Palo logs whose threat field equals "SMB: User Password Brute Force Attempt(40004)", and to resolve the Windows log fields (ComputerName & username) plus the Symantec log fields (Symantec Detected User @ Destination & Symantec Destination Node) based on the dest_ip value of the Palo logs. For your understanding, below is what I get from your query (10000 statistics - 24hrs): I need something similar to the below table (106 statistics - Last 24hrs): Once again, thank you very much for your support. I hope you can figure out my real requirement from this. Cheers, NeoKevin
Your sample events didn't have duplicates in them. Please share some representative unformatted events and explain what your expected results would be from those events.
Some KOs can be assigned to new apps.  Look for "move" as an edit option. For those KOs that cannot be moved, you'll have to do so manually.  Create a new object by the same name in another app and copy/paste the values from the original app.  Then delete the original object.  In Splunk Cloud, do this in an app you then upload.  If the original app is Search then you cannot upload a replacement app and will have to manipulate the object in the UI.