Hi @Splunkanator,

If your events have an extracted uri_query field, which is typical for e.g. NCSA and W3C log formats, you can use != or NOT to exclude events:

index=main sourcetype=access_common uri_query!=*param=X* uri_query!=*param=Y* uri_query!=*param=Z*

or

index=main sourcetype=access_common NOT uri_query IN (*param=X* *param=Y* *param=Z*)

However, those will also exclude events with partially matching parameter names or values. Performance will vary, but you can use the regex command to keep only events whose fields do not match a regular expression:

index=main sourcetype=access_common
| regex uri_query!="(^|&)param=(X|Y|Z)(&|$)"
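A quick way to sanity-check that regular expression is to generate a few sample events with makeresults (the uri_query values below are made up for illustration):

```
| makeresults count=3
| streamstats count AS n
| eval uri_query=case(n=1, "param=X&other=1", n=2, "other=1&myparam=X", n=3, "param=Z")
| regex uri_query!="(^|&)param=(X|Y|Z)(&|$)"
```

Only the second event survives, because its parameter name merely contains "param" rather than matching it exactly — which is precisely the partial-match case the wildcard searches above cannot distinguish.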
Hi @yuvrajsharma_13,

You're attempting to join both searches by a field named AccountIDOpened, which neither search includes. Are you trying to return all results in the outer/left search that are not present in the inner/right search, or vice versa? Based on your description, you can find accounts that were opened but not posted by searching for opened accounts and excluding accounts that were posted using a subsearch:

index=a "digital account opened" NOT
[ search index=b "/api/posted" 200
| rex "GET /api/posted (?<requestID>\d+) HTTP 1.1"
| rename requestID as "msg.requestID"
| table "msg.requestID" ]
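If the subsearch approach runs into subsearch result limits, a common alternative (sketched here with the same hypothetical index and field names) is to search both indexes at once and use stats to keep only IDs that never appear in the posted events:

```
(index=a "digital account opened") OR (index=b "/api/posted" 200)
| rex "GET /api/posted (?<requestID>\d+) HTTP 1.1"
| eval id=coalesce('msg.requestID', requestID)
| stats count(eval(index="b")) AS posted BY id
| where posted=0
```

The stats approach avoids the row and runtime limits that apply to subsearches, at the cost of scanning both indexes in a single search.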
Hi @brc55,

You can save an inputlookup search as an alert:

| inputlookup my_lookup ``` or my_lookup.csv etc. ```

After setting a schedule, add "Send email" as a triggered action. Under the Send email settings, select "Attach CSV." The search results will be attached to the message as a CSV file. If your lookup file is large (greater than 10,000 rows), you may need to modify the maxresults setting in the alert_actions.conf [email] stanza:

# e.g. /opt/splunk/etc/system/local/alert_actions.conf
[email]
maxresults = 25000
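Alternatively, if you'd rather not frame it as an alert, a scheduled report can email the lookup directly with the sendemail command (the recipient address and subject here are placeholders):

```
| inputlookup my_lookup
| sendemail to="team@example.com" sendcsv=true subject="Daily lookup export"
```

sendemail uses the mail server configured on the search head; the Send email alert action is generally easier to manage, so treat this as a fallback.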
Ah, you are correct. "name" is the relative distinguished name (RDN) of the object. If the object's distinguished name is CN=foo,DC=example,DC=com, the name value should be foo.

accountExpires is a valid attribute in my Windows Server 2022 Active Directory environment. A slightly modified version of the search works for me:

| ldapsearch search="(&(objectClass=user))" attrs="name,accountExpires"

What other information can you provide about your Active Directory environment?
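One thing to watch with accountExpires: in Active Directory it is stored as a Windows FILETIME (100-nanosecond intervals since 1601-01-01), and values of 0 or 9223372036854775807 mean "never expires". If the attribute comes back as a raw number, a conversion along these lines (a sketch — the ldapsearch app may already render it as a date, in which case the eval is unnecessary) makes it readable:

```
| ldapsearch search="(&(objectClass=user))" attrs="name,accountExpires"
| eval expires=if(accountExpires=0 OR accountExpires=9223372036854775807, "never",
      strftime(accountExpires/10000000 - 11644473600, "%Y-%m-%d"))
```

The arithmetic divides by 10^7 to get seconds and subtracts 11644473600, the offset in seconds between the 1601 FILETIME epoch and the 1970 Unix epoch.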
I am joining two Splunk queries to capture the values which are not present in the subquery. I am trying to find the accounts which opened today but have not posted, but the query is not returning any values. Let me know if we have another way to get the values.

Query 1: Returns accounts opened today.

index=a "digital account opened"
| rename msg.requestID AccountID
| table AccountID

Query 2: Accounts posted today.

index=b "/api/posted" 200
| rex "GET /api/posted (?<accountID>\d+) HTTP 1.1"
| table AccountID

Final query:

index=a "digital account opened"
| rename msg.requestID AccountID
| table AccountID
| join type=left AccountIDOpened
    [ search index=b "/api/posted" 200
    | rex "GET /api/posted (?<accountID>\d+) HTTP 1.1"
    | table AccountID ]
| search AccountIDOpened=null
| table _time, AccountIDOpened
Hello,

I currently upload data into a lookup table and also have to manually send this data to another team on a daily basis. Unfortunately, they do not/cannot have access to the lookup table in Splunk. Is there a way to automate this by putting the data from the lookup table into a report and having that report emailed to a group of users daily?
Hi @tv00638481,

Make sure Splunk Add-on for Salesforce is installed on the search head, and verify the lookup_sfdc_usernames KV store lookup definition is shared globally and accessible to everyone who needs to use Salesforce App for Splunk.

Also make sure the Lookup - USER_ID to USER_NAME saved search is enabled and scheduled. This is the search that populates the lookup. To improve performance, modify the saved search to use your Salesforce index instead of index=*. Splunk normally uses macros to specify indexes, but that was overlooked in this add-on.
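As a hypothetical sketch of that change — the index name and sourcetype below are placeholders, so check the actual saved search definition in the add-on before editing — the idea is simply to replace the wildcard index with your own:

```
index=salesforce sourcetype="sfdc:user"
| table Id, Username
| outputlookup lookup_sfdc_usernames
```

Restricting the base search to one index avoids scanning every index on every scheduled run.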
No. I suggest following up with your account team so they can see what values you are comparing and ensure they are accurate (or post the actual values you are looking at). All your storage forecasting should be is: how much raw data do you ingest a day, multiplied by how many days you want to store in searchable + archive, and does that fit into your DDAS and DDAA entitlement? The Splunk Cloud service takes care of the rest. No compression math at all, unlike the on-prem days.
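As a worked example of that arithmetic with made-up numbers (100 GB/day raw, 90 searchable days, 275 additional archive days), you can even do it in SPL:

```
| makeresults
| eval daily_raw_gb=100, searchable_days=90, archive_days=275
| eval ddas_need_gb=daily_raw_gb*searchable_days, ddaa_need_gb=daily_raw_gb*archive_days
| table daily_raw_gb, searchable_days, archive_days, ddas_need_gb, ddaa_need_gb
```

If the resulting 9,000 GB searchable and 27,500 GB archive figures fit under your DDAS and DDAA entitlements, you're sized; no compression ratio appears anywhere in the calculation.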
I think you mean DDAA (Dynamic Data Active Archive), which is archive storage (cold storage), and no, we don't only store compressed raw data. The whole bucket goes to archive storage and is then copied back into the object store upon restore. DDAS (Dynamic Data Active Searchable) is basically "SmartStore". Customers using DDAS and DDAA only need to care about the raw size, not compression. Compression doesn't come into play with Splunk Cloud entitlements, so the old on-prem math doesn't apply.

Now, depending on what data he's actually looking at, he may or may not be seeing compressed buckets, but more often it comes down to understanding when buckets will actually roll due to timestamps in the bucket. Overall, in Splunk Cloud all you care about is raw data size, as you are not sizing disk in the cloud; you are sizing your subscription.

https://docs.splunk.com/Documentation/SplunkCloud/9.1.2308/Service/SplunkCloudservice — see the storage section.
Thank you for the response. You mean to say the daily ingested data is compressed when it comes into online active storage, which is why I'm only noticing a difference of ~60GB on a day-to-day basis? Is my understanding correct?
Hi,

We have onboarded Salesforce in our environment. However, when we run queries, we notice the below errors occurring continuously across the instance whenever any query is run, and they also show on all the dashboards:

[idx-i-xxxx.splunkcloud.com, idx-i-04xxxx.xxxx.splunkcloud.com, idx-i-075xxx.xxx.splunkcloud.com, idx-i-0axxx.xxxx.splunkcloud.com, idx-i-0be.xxxx.splunkcloud.com, sh-i-026xxx.xxxx.splunkcloud.com] Could not load lookup=LOOKUP-SFDC-USER_NAME
Try the following:
Edit your /etc/systemd/system/Splunkd.service unit file.
In the [Service] section, add the following two lines:
Environment=REQUESTS_CA_BUNDLE=/etc/ssl/ca-bundle.pem
Environment=SSL_CERT_FILE=/etc/ssl/ca-bundle.pem
Replace /etc/ssl/ca-bundle.pem with the path to your CA bundle containing your own certificate (or keep the path and add your CA certificates to the Linux OS truststore). After editing the unit file, run systemctl daemon-reload and restart Splunk so the environment variables take effect.
Python's ssl module (and libraries built on it, such as urllib3) will use the CA trust bundle specified in SSL_CERT_FILE, and the requests library will use REQUESTS_CA_BUNDLE.
You didn't read the docs I pointed you to. Stitching your searches together with random commands won't work. Results from a subsearch are rendered as a set of conditions for the outer search - you don't pass arguments/tokens/whatever to the subsearch from the outer search. (We'll leave the map command aside for now.)
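To see exactly what the outer search receives, run the subsearch on its own with the format command appended; it prints the boolean expression the subsearch results get rewritten into (the index and field names here are illustrative):

```
index=b "/api/posted" 200
| rex "GET /api/posted (?<AccountID>\d+) HTTP 1.1"
| table AccountID
| format
```

The single result is a search field along the lines of ( ( AccountID="123" ) OR ( AccountID="456" ) ) - that expression is what gets spliced into the outer search, and nothing flows the other way.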
I clicked Add Data on the home screen, clicked Upload at the bottom, dragged in my CSV, filled in the name, description, and other fields, didn't change anything in the input settings, and submitted.