All Posts

I do not know the max number of emails, but I believe the first answer will be sufficient and I will try to work around it in my join.
Well, you probably can, using the foreach command and the {} notation (or alternatively some mv* magic with the kv command in the end), but the question is: what for? It's usually the other way around that's the problem: normalizing your data when you have multiple fields holding "the same" data.
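A simpler fixed-width variant of that idea, as a minimal sketch (plain mvindex rather than the foreach/{} trick; the email_N names and the three-slot limit are assumptions, and mvindex simply returns null past the end of the list):

| makeresults
| eval email=split("user@domain.com userT1@domain.com", " ")
| eval email_1=mvindex(email, 0)
| eval email_2=mvindex(email, 1)
| eval email_3=mvindex(email, 2)

The makeresults/split lines only fabricate a multivalue field to demonstrate on; swap them for your real search.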
Hi @JandrevdM, do you know the max number of emails? If it is fixed (always the same quantity), you could use nomv and then a regex to divide the emails. Ciao. Giuseppe
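For example, with a fixed quantity of two, a hedged sketch of that nomv-plus-regex idea (field names are illustrative):

| nomv email
| rex field=email "^(?<email_1>\S+)\s+(?<email_2>\S+)$"

nomv flattens the multivalue field into a single value, and the regex then captures each address into its own field.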
Hello Splunkers, I would like to switch between two base searches from an input dropdown: when the dropdown is set to All, I need to pass one base search; when any value other than All is selected, it needs to pass a different base search. Thanks!
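In case it helps frame answers: a minimal Simple XML sketch of the usual pattern, where a <change> handler on the dropdown puts the desired query text into a token (all token, index, and query names here are made up):

<input type="dropdown" token="scope">
  <label>Scope</label>
  <choice value="all">All</choice>
  <choice value="web">web</choice>
  <change>
    <condition value="all">
      <set token="base_query">index=main | stats count by host</set>
    </condition>
    <condition>
      <set token="base_query">index=main sourcetype=$value$ | stats count by host</set>
    </condition>
  </change>
</input>
<panel>
  <table>
    <search>
      <query>$base_query$</query>
    </search>
  </table>
</panel>

The whole query is swapped via a token because the base attribute of <search> does not accept tokens.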
Instead of having:

email
user@domain.com userT1@domain.com

I would like to split it into:

email 1 | email 2
user@domain.com | userT1@domain.com
Hi @PotatoDataUser,
try using the lookup command (https://docs.splunk.com/Documentation/Splunk/9.3.1/SearchReference/Lookup):

| tstats count where index=my_index by host
| lookup my_lookup.csv server_name AS host

Ciao.
Giuseppe
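If the lookup also carries detail fields, a hedged extension of the same search (version and owner are assumed field names; "AS host" maps the tstats output field onto the lookup's key field):

| tstats count where index=my_index by host
| lookup my_lookup.csv server_name AS host OUTPUT version owner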
Hi @JandrevdM, what do you mean by "split"? If you have a multivalue field, you could try mvexpand to get a row for each value, or nomv to get all the values in one row. Ciao. Giuseppe
Although removing it through REST probably works, I find it easier to do it this way:
1. Edit the configuration file at SPLUNK_INSTALL_DIR\etc\system\local\authentication.conf
2. Navigate to Settings > Authentication methods > Reload authentication configuration
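For orientation, an illustrative (not literal) authentication.conf before the edit; the corpLDAP stanza name and its values are made up. Reverting to native auth means setting authType back to Splunk and deleting the strategy stanza:

[authentication]
authType = LDAP
authSettings = corpLDAP

[corpLDAP]
host = ldap.example.com
port = 636
SSLEnabled = 1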
Hi guys, I have one master list that includes all items, and I want to consolidate two other time-related tables into a single chart, as shown in the example below.

[screenshots: master list, time-related table 1, time-related table 2, result chart]

And could I use the chart command to produce the pivot chart in Splunk?
Hello, We have suddenly been facing a weird error: our production Splunk Cloud Enterprise Security Incident Review dashboard isn't showing the drilldown searches in any of the triggered notables. For all of them, a "Something went wrong" message is thrown. I tried changing the roles to ess_admin and tried multiple drilldown searches, but none helped. I am wondering if this is an app backend problem, but I just wanted to make sure I am not missing anything before opening a support ticket. Any help would be greatly appreciated.
Thanks. Is there any way to split it? I tried this, but it is not working:

index=collect_identities sourcetype=ldap:query
| dedup email
| eval identity=replace(identity, "Adm0", "")
| eval identity=replace(identity, "Adm", "")
| eval identity=lower(identity)
| stats values(email) AS email values(extensionAttribute10) AS extensionAttribute10 values(extensionAttribute11) AS extensionAttribute11 values(first) AS first values(last) AS last BY identity
| eval email=split(email, "")
| eval extensionAttribute10=split(extensionAttribute10, "")
| eval extensionAttribute11=split(extensionAttribute11, "")
| eval first=split(first, "")
| eval last=split(last, "")
| mvexpand email
| mvexpand extensionAttribute10
| mvexpand extensionAttribute11
| mvexpand first
| mvexpand last
So I have a lookup file with a complete list of servers and their details (version, owner, etc.), and an index my_index that gets logs from servers. This is the search I am using right now:

| inputlookup my_lookup.csv
| join type=left server_name [ | tstats count where index=my_index by host | eval reporting="yes" ]
| eval reporting=if(isnull(reporting),"No","Yes")

I want to validate the list by referencing it against the tstats results and show the whole list from the lookup file. What I want to know: is this search accurate? Will the subsearch truncate results, giving me inaccurate output? Is there an alternate way to write this search? Please help.
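For comparison, a join-free sketch of the same validation (version and owner stand in for whatever detail fields the lookup actually has). It keeps the large indexed side out of a subsearch; only the lookup list, which is bounded, is appended:

| tstats count where index=my_index by host
| rename host AS server_name
| append [ | inputlookup my_lookup.csv ]
| stats sum(count) AS events values(version) AS version values(owner) AS owner by server_name
| eval reporting=if(coalesce(events,0)>0, "Yes", "No")

Since every lookup row flows through the stats, the full server list survives even when the index side is large.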
Hi @JandrevdM,
you must find a common key between the records. If identity is your key, you could try something like this:

<your_search>
| stats values(email) AS email values(extensionAttribute10) AS extensionAttribute10 values(extensionAttribute11) AS extensionAttribute11 values(first) AS first values(last) AS last BY identity

Ciao.
Giuseppe
Good day, Is there a way to join all my rows into one? My simple query:

index=collect_identities sourcetype=ldap:query user
| dedup email
| table email extensionAttribute10 extensionAttribute11 first last identity

shows results like this, as I have more than one email:

email | extensionAttribute10 | extensionAttribute11 | first | last | identity
user@domain.com | | user@consultant.com | User | Surname | USurname
userT1@domain.com | user@domain.com | user@domain.com | User | Surname | USurname
userT0@domain.com | user@domain.com | user@domain.com | User | Surname | USurname

I want to add a primary key that searches for "user@domain.com" and displays all the email addresses they have in one row. Example:

email | extensionAttribute10 | extensionAttribute11 | first | last | identity | email2 | email3
user@domain.com | user@domain.com | user@consultant.com | User | Surname | USurname | userT1@domain.com | userT0@domain.com
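For reference, a hedged sketch of one way to get that shape, keying on identity as suggested further up the thread (the mvindex slots and the email2/email3 names are assumptions):

index=collect_identities sourcetype=ldap:query user
| stats values(email) AS all_emails values(extensionAttribute10) AS extensionAttribute10 values(extensionAttribute11) AS extensionAttribute11 values(first) AS first values(last) AS last by identity
| eval email=mvindex(all_emails,0), email2=mvindex(all_emails,1), email3=mvindex(all_emails,2)
| fields - all_emails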
Hello all, I configured an app, and in the asset conf I added an environment variable "https_proxy", but somehow I see that the action still doesn't go out via the proxy; it tries to go directly to the destination address. I opened the app code to look for references to this variable, but I couldn't find any. Can anyone shed light and explain how I can check the references to those variables? In other apps I manage to use the proxy variable successfully; it only happens with the AD LDAP app.
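For orientation, a hedged sketch of why the variable may be ignored (all names below are made up): environment proxy settings only take effect if the app's HTTP client honors them, and raw LDAP socket connections generally do not.

import os
import requests

# requests reads https_proxy/HTTPS_PROXY from the environment by default,
# so this call would go through the proxy:
os.environ["https_proxy"] = "http://proxy.example.com:3128"
resp = requests.get("https://www.example.com", timeout=10)

# An ldap3/python-ldap style connection, by contrast, opens its own socket
# and ignores these variables entirely. Grepping the app for "proxy"
# (e.g. grep -ri proxy <app_dir>) is a quick way to see whether the code
# references them at all.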
Hi there, I have a cluster on MongoDB Atlas that contains the data connected to my application. That cluster produces logs that can be downloaded in .log format or .gz (compressed) format. To query and view my logs easily, I would like to use Splunk. Is there any way to ingest those logs from MongoDB Atlas into a Splunk instance via API? If there is, could anyone kindly share documentation or a process for how to accomplish this? NB: I can obtain the logs from MongoDB Atlas via a cURL request.
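One hedged way to wire this up with cURL and Splunk's HTTP Event Collector; the Atlas endpoint below is from the v1.0 logs API and the keys/IDs are placeholders, so check it against the current MongoDB Atlas docs:

# Pull the compressed log from Atlas (digest auth)
curl --user "PUBLIC_KEY:PRIVATE_KEY" --digest \
  "https://cloud.mongodb.com/api/atlas/v1.0/groups/GROUP_ID/clusters/HOSTNAME/logs/mongodb.gz" \
  --output mongodb.gz

# Stream the decompressed lines into Splunk's HEC raw endpoint
gunzip -c mongodb.gz | curl -k "https://SPLUNK_HOST:8088/services/collector/raw?sourcetype=mongod" \
  -H "Authorization: Splunk HEC_TOKEN" --data-binary @-

A scheduled script around these two calls (or a modular input) would keep the ingestion continuous.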
Hi @rpfutrell, it should be a paid app. You have to contact the producer through their site; it's maintained by the developer. Ciao. Giuseppe
Hi @splunksuperman,
I suppose that you're using a CSV file to import these data. You have two choices:
- use as timestamp a date and time in each row of the CSV file (if present), or
- use the current time (the index time) as timestamp.

For both solutions, you have to add an option to your sourcetype stanza in props.conf.

For the first one:

[your_sourcetype]
TIMESTAMP_FIELDS = <your_timestamp_field>

For the second one:

[your_sourcetype]
DATETIME_CONFIG = CURRENT

Then, to check whether a server isn't sending logs, you have two choices: create a lookup containing the list of the hosts to monitor (called e.g. perimeter.csv and containing at least one field called "host") and run a search like the following, which checks whether there are logs from the listed hosts in the last 24 hours:

| tstats count WHERE index=your_index earliest=-24h latest=now BY host
| append [ | inputlookup perimeter.csv | eval count=0 | fields host count ]
| stats sum(count) AS total BY host
| where total=0

or check whether a server sent logs in the last 30 days but not in the last 24 hours with the following search:

| tstats latest(_time) AS latest count WHERE index=your_index earliest=-30d latest=now BY host
| eval period=if(latest<now()-86400,"Previous","Latest")
| eval latest=strftime(latest,"%Y-%m-%d %H:%M:%S")
| where period="Previous"
| table host latest
| rename latest AS "Last Connection"

Ciao.
Giuseppe
@anandhalagaras1 I am curious, since I am working on a similar project: were you able to figure this out? I would appreciate it if you could share your findings.
Which forwarder agent version includes the fix for the OpenSSL 1.0.2 < 1.0.2zk vulnerability? If there is no fix yet, when can we expect one, and which forwarder version will include it? Referenced findings: OpenSSL SEoL (1.0.2.x); OpenSSL 1.0.2 < 1.0.2zk Vulnerability.