All Posts


Hi @dilipkha, Under the hood, Boost calls getaddrinfo on Linux, which should accept IP addresses as strings. When I compile the example with g++ 8.5.0 and Boost 1.66.0 on my RHEL 8 host, the program works as expected using http as the service:

$ g++ -o sync_client -lboost_system -lpthread sync_client.cpp
$ chmod 0775 sync_client
$ host httpbin.org
httpbin.org has address 23.22.173.247
httpbin.org has address 52.206.0.51
$ ./sync_client 23.22.173.247 /get
Date: Sun, 28 Jan 2024 04:47:22 GMT
Content-Type: application/json
Content-Length: 225
Connection: close
Server: gunicorn/19.9.0
Access-Control-Allow-Origin: *
Access-Control-Allow-Credentials: true

{
  "args": {},
  "headers": {
    "Accept": "*/*",
    "Host": "23.22.173.247",
    "X-Amzn-Trace-Id": "Root=1-65b5dc5a-2e13219829bcecc360851dcb"
  },
  "origin": "x.x.x.x",
  "url": "http://23.22.173.247/get"
}

In your implementation, you may need to use a different query constructor on line 35, e.g.:

tcp::resolver::query query(argv[1], "8089", boost::asio::ip::resolver_query_base::numeric_host | boost::asio::ip::resolver_query_base::numeric_service);

Note that I've also replaced "http" with "8089" to use the default Splunk management port. On most systems, the http service resolves to port 80. See e.g. /etc/services.
Hi, I'm using the Splunk Cloud Platform for a school project. When I import my CSV files into Splunk, it doesn't seem to recognise the headers of my CSV as fields. Does anyone know how to get Splunk to recognise my headers? Thanks for any help.
At a glance, it's a score calculated from _audit data based on search run time, the absence of an index predicate, the presence of prestats transforming commands, the position of other transforming commands, memory use, and the presence of an initial makeresults or metadata command. Pain is inversely proportional to efficiency. @sideview may be lurking. Have you tried contacting them directly?
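If you want to poke at the underlying data yourself, a rough sketch against the audit trail might look like the search below. The total_run_time, user, savedsearch_name, and search fields are standard in _audit events, but the index-predicate check and any weighting here are illustrative, not the app's actual scoring logic:

index=_audit action=search info=completed search=*
| eval has_index_predicate=if(like(search, "%index=%"), 1, 0)
| stats avg(total_run_time) as avg_run_time count by user, savedsearch_name, has_index_predicate
| sort - avg_run_time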
Hi @klim, I don't have an active IdP to validate, but as I recall, you would specify your preferred mapping as the Name ID format/attribute in the SAML IdP and not in the SAML SP (Splunk). Home directories can be managed at the file system level in $SPLUNK_HOME/etc/users by renaming directories. Ownership of most knowledge objects can be changed from Settings > All Configurations > Reassign Knowledge Objects. For the few objects that can't be reassigned via the user interface, you'll need to update all instances of $SPLUNK_HOME/etc/apps/*/metadata/*.meta as needed.
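Before renaming home directories, it can help to list what a given user still owns. Here's a sketch for saved searches (other knowledge object types have similar REST endpoints; old_username is a placeholder, and the rest command requires a role with access to those endpoints):

| rest /servicesNS/-/-/saved/searches count=0 splunk_server=local
| search eai:acl.owner="old_username"
| table title, eai:acl.app, eai:acl.sharing, eai:acl.owner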
The scheduled alert should be owned by a user with access to the app and (probably) be saved within the app. Their access to Splunk should have no bearing on whether they can access MIME attachments in their email client; however, they may not be able to access any links you include. The CSV file will be an attachment, not a link.
Hi @Splunkanator, If your events have an extracted uri_query field, which is typical for e.g. NCSA and W3C log formats, you can use != or NOT to exclude events:

index=main sourcetype=access_common uri_query!=*param=X* uri_query!=*param=Y* uri_query!=*param=Z*

or

index=main sourcetype=access_common NOT uri_query IN (*param=X* *param=Y* *param=Z*)

However, those will exclude events with partially matching names or values. Performance will vary, but you can use the regex command to match events with fields that do not match a regular expression:

index=main sourcetype=access_common
| regex uri_query!="(^|&)param=(X|Y|Z)(&|$)"
Will look into this. The lookup is within/associated with a specific app that the other team does not have access to. Would this cause an issue?
Hi @yuvrajsharma_13, You're attempting to join both searches by a field named AccountIDOpened, which neither search includes. Are you trying to return all results in the outer/left search that are not present in the inner/right search or vice versa? Based on your description, you can find accounts that were opened but not posted by searching for opened accounts and excluding accounts that were posted using a subsearch:

index=a "digital account opened" NOT
    [ search index=b "/api/posted" 200
      | rex "GET /api/posted (?<requestID>\d+) HTTP 1.1"
      | rename requestID as msg.requestID
      | table msg.requestID ]
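If the subsearch ever runs into subsearch result limits, the same comparison can be done in a single search with stats. This is a sketch based on the field names above; adjust the extraction and field names to your data:

(index=a "digital account opened") OR (index=b "/api/posted" 200)
| rex "GET /api/posted (?<requestID>\d+) HTTP 1.1"
| eval AccountID=coalesce('msg.requestID', requestID)
| stats dc(index) as index_count values(index) as indexes by AccountID
| where index_count=1 AND indexes="a"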
If your mail server limits attachment size or otherwise restricts message content, you should contact your mail administrator.
Hi @brc55, You can save an inputlookup search as an alert:

| inputlookup my_lookup ``` or my_lookup.csv etc. ```

After setting a schedule, add "Send email" as a triggered action. Under the Send email settings, select "Attach CSV." The search results will be attached to the message as a CSV file. If your lookup file is large (greater than 10,000 rows), you may need to modify the maxresults setting in the alert_actions.conf [email] stanza:

# e.g. /opt/splunk/etc/system/local/alert_actions.conf
[email]
maxresults = 25000
Ah, you are correct. "name" is the relative distinguished name (RDN) of the object. If the object's distinguished name is CN=foo,DC=example,DC=com, the name value should be foo. accountExpires is a valid attribute in my Windows Server 2022 Active Directory environment. A slightly modified version of the search works for me: | ldapsearch search="(&(objectClass=user))" attrs="name,accountExpires" What other information can you provide about your Active Directory environment?
@gcusello 
I am joining two Splunk queries to capture the values which are not present in the subquery. I'm trying to find the accounts which opened today but were not posted, but the query is not returning any values. Let me know if there is another way to get the values.

Query 1: Returns accounts opened today.

index=a  "digital account opened" | rename msg.requestID AccountID | table AccountID

Query 2: Accounts posted today.

index=b "/api/posted" 200  | rex "GET /api/posted (?<accountID>\d+) HTTP 1.1" table AccountID

Final Query:

index=a  "digital account opened" | rename msg.requestID AccountID | table AccountID  | join type=left  AccountIDOpened [ search index=b "/api/posted" 200  | rex "GET /api/posted (?<accountID>\d+) HTTP 1.1" table AccountID ] | search AccountIDOpened =null | table _time,AccountIDOpened
Hello, I currently upload data into a lookup table and also have to send this data manually to another team on a daily basis. Unfortunately, they do not/cannot have access to the lookup table in Splunk. Is there a way to automate this a little more by sending the data in the lookup table into a report and having that report emailed to a group of users daily?
Hi @tv00638481, Make sure Splunk Add-on for Salesforce is installed on the search head and verify the lookup_sfdc_usernames KV store lookup definition is shared globally and accessible to everyone who needs to use Salesforce App for Splunk. Also make sure the Lookup - USER_ID to USER_NAME saved search is enabled and scheduled. This is the search that populates the lookup. To improve performance, modify the saved search to use your Salesforce index instead of index=*. Splunk normally uses macros to specify indexes, but that was overlooked in this add-on.
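For example, assuming your Salesforce data is in an index named salesforce (an illustrative name) and the saved search currently begins with index=*, the only change is the leading index predicate; keep whatever sourcetype and remaining pipeline the saved search already uses:

index=* sourcetype="sfdc:user" ...          ``` before ```
index=salesforce sourcetype="sfdc:user" ... ``` after ```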
No. I suggest following up with your account team so they can see what values you are comparing and ensure they are accurate. (Or post the actual values you are looking at.) All your storage forecasting should be is: how much raw data do you ingest a day, multiplied by how many days you want to store in searchable + archive, and does that fit into your DDAS and DDAA entitlement? The Splunk Cloud service takes care of the rest. No compression math at all versus the on-prem days.
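For example, 60 GB/day of raw ingest retained for 90 days searchable plus 275 days archived works out to roughly 5.4 TB of DDAS and 16.5 TB of DDAA. To check your actual daily raw ingest, a common sketch against the license usage log is below (in Splunk Cloud the same figures are also available in the Cloud Monitoring Console):

index=_internal source=*license_usage.log* type=Usage
| timechart span=1d sum(b) as bytes
| eval daily_GB=round(bytes/1024/1024/1024, 2)
| fields _time, daily_GB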
I think you mean DDAA (Dynamic Data Active Archive), which is archive storage (cold storage), and no, we don't only store compressed raw data. The whole bucket goes to archive storage and is then copied back into the object store upon restore. DDAS (Dynamic Data Active Searchable) is basically "SmartStore". Customers using DDAS and DDAA only need to care about the raw size, not compression. Compression doesn't come into play with Splunk Cloud entitlements, so the old on-prem math doesn't apply. Now, depending on what data he's actually looking at, he may or may not be seeing compressed buckets, but more often it comes down to understanding when buckets will actually roll due to timestamps in the bucket. Overall, in Splunk Cloud all you care about is raw data size, as you are not sizing disk in the cloud; you are sizing your subscription. See the storage section of https://docs.splunk.com/Documentation/SplunkCloud/9.1.2308/Service/SplunkCloudservice
Thank you, let me go through the documentation.
Thank you for the response. You mean to say the daily ingested data is compressed when it comes into online active storage, which is the reason I'm only noticing a difference of ~60 GB on a day-to-day basis. Is my understanding correct?
Hi, We have onboarded Salesforce in our environment. However, when we run queries, we notice the below errors occurring continuously across the instance whenever any query is run, and they also appear on all the dashboards.

[idx-i- xxxx.splunkcloud.com,idx-i-04xxxx.xxxx.splunkcloud.com,idx-i-075xxx.xxx.splunkcloud.com.idx-i- Oaxxx.xxxx.splunkcloud.com,idx-i-0be.xxxx splunkcloud.com,sh-i-026xxx.xxxx.splunkcloud.com] Could not load lookup=LOOKUP-SFDC-USER_NAME