All Posts

By "saved search" do you mean you are loading the results from a previously executed search, or re-running the search with substituted values? SVG is rendered in the browser, so your server configuration makes little difference here; you could also try upgrading your browser environment.
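For reference, the two interpretations look different in SPL. A minimal sketch, assuming a saved search named my_svg_search owned by admin in the search app (all names are placeholders):

```
``` Load the cached results of the most recent scheduled run
    (fast, but possibly stale): ```
| loadjob savedsearch="admin:search:my_svg_search"
```

versus:

```
``` Re-run the saved search from scratch (fresh results, full search cost): ```
| savedsearch my_svg_search
```

With ~140 panels, the difference between re-running and loading cached results matters a lot for page load time.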
What is the expected load time of a Dashboard Studio page in view mode when using only saved searches? In our environment we have a dashboard page with ~140 Choropleth SVG items, each colored by a saved search. When loading/reloading the page, it takes 6 seconds for the overall Splunk page to load, another 6 seconds to load all our SVGs, and another 2 to color them, resulting in ~14.5 seconds to load that page in total. This is running with Splunk 9.1.0.2 in an environment with dedicated search heads and indexers on virtual machines, all NVMe storage, plenty of RAM, ... Using a simpler dashboard (<5 SVG items and a table with a live search), the total page loads within 5 seconds.   Is this the expected performance? Are there any performance tweaks we could do? Things we should check/change/...?
What is the full search you are currently using (which is not giving you the results you expect)?
Hello, We are ingesting CSV files from an S3 bucket using the Custom SQS-based S3 input. Although the data is pulled in correctly, the fields are not getting extracted properly: the header line is ingested as a separate event and the header fields are not extracted. I have defined INDEXED_EXTRACTIONS = csv in props.conf. Is there any other way to extract a CSV file from the S3 bucket? Any workaround?
Are you talking about a user preference issue or an issue in ingested data?  If data is in UTC, your user can always select UTC as their UI preference; if your application logs in a local zone AND includes zone info in data, Splunk internally still uses UTC. If data is in a different time zone but lacks zone info, that's a really bad situation.  There are several documents about how to configure time correctly.  A good place to start is Configure timestamps.  Hope this helps.
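As a concrete illustration of the zone-less case, a props.conf sketch; the sourcetype name and the zone here are assumptions to adapt to your data:

```
# props.conf on the first parsing tier (indexer or heavy forwarder)
[my_app_logs]
# Interpret timestamps that carry no zone info as US Eastern time
TZ = America/New_York
```

Splunk still stores _time internally as UTC; TZ only tells the parser how to interpret zone-less timestamps at index time.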
Hi @vikas_gopal, first of all, the configuration you defined isn't recommended by Splunk, but since it isn't a production system it could be acceptable. About the idea of having a standalone server containing the old data (which is in an indexer cluster): you could use one of the cluster search peers after disconnecting it from the old cluster, but pay attention to the steps to follow: disconnect the indexers from the cluster one by one; this way the last remaining indexer will hold a copy of all the data, and then you can disconnect it from the cluster as well. It isn't a usual procedure and I'm not sure it has been tested, but it should work. Ciao. Giuseppe
It's not working for this requirement. I see it's returning the entire output field value multiple times (equal to the number of lines in the field).

Note: "not working" is about the least informative phrase in the best of scenarios, as it conveys virtually no information.  Yes, the original output field is expected to be attached to each row.  If you don't want to see that, filter it out:

| eval _raw = replace(output, "\|", ",")
| multikv
| fields - _* linecount output

The real question is: are the fields DbName, CurrentSizeGB, etc., extracted? (Each row becomes its own event.  If you want multivalued fields instead, you can do some stats.)  Here is an emulation that you can play with and compare with real data:

| makeresults
| eval output = "DbName|CurrentSizeGB|UsedSpaceGB|FreeSpaceGB|ExtractedDate
abc|60.738|39.844|20.894|Sep 5 2023 10:00AM
def|0.098|0.017|0.081|Sep 5 2023 10:00AM
pqr|15.859|0.534|15.325|Sep 5 2023 10:00AM
xyz|32.733|0.675|32.058|Sep 5 2023 10:00AM"
``` data emulation above ```

The above emulated input combined with the search gives:

CurrentSizeGB  DbName  ExtractedDate       FreeSpaceGB  UsedSpaceGB
60.738         abc     Sep 5 2023 10:00AM  20.894       39.844
0.098          def     Sep 5 2023 10:00AM  0.081        0.017
15.859         pqr     Sep 5 2023 10:00AM  15.325       0.534
32.733         xyz     Sep 5 2023 10:00AM  32.058       0.675

If these fields are not extracted as expected, you need to illustrate your original data more precisely so volunteers can help diagnose. (Anonymize as needed.)  In addition, an illustration of the actual output will also be more helpful than a useless phrase like "not working".
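For the multivalued-fields variant mentioned above, a sketch (field names taken from the emulation; untested against real data):

```
| eval _raw = replace(output, "\|", ",")
| multikv
| fields - _* linecount output
| stats list(DbName) as DbName list(CurrentSizeGB) as CurrentSizeGB
        list(UsedSpaceGB) as UsedSpaceGB list(FreeSpaceGB) as FreeSpaceGB
        by ExtractedDate
```

This collapses the per-row events back into one result per extraction batch, with each column as a multivalued field.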
Hi @Adpafer, what do you mean by disabling automatic load balancing, and why do you want to do this? Anyway, if you want to send logs from one UF to a specific indexer (there's no reason to do this!), you can use only that address in the outputs.conf of the UF. If you want to perform selective forwarding, see the configurations at https://docs.splunk.com/Documentation/Splunk/9.1.1/Forwarding/Routeandfilterdatad#Route_inputs_to_specific_indexers_based_on_the_data_input Ciao. Giuseppe
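For completeness, pointing a UF at a single indexer is just an outputs.conf with one server entry; the hostname and port below are placeholders:

```
# outputs.conf on the Universal Forwarder
[tcpout]
defaultGroup = single_indexer

[tcpout:single_indexer]
# Only one server listed, so no load balancing takes place
server = idx01.example.com:9997
```

With two or more servers in the list, the UF auto-load-balances across them; listing one is the simplest way to "disable" it for that forwarder.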
Hi @anooshac, let me understand: you could have different log formats, "C:\a\b\c\abc\xyz\abc.h" or "C:\a\b\c\abc.pqr.a1.b1.jkl\xyz\abc.h", is that correct? In this case, you could try: | rex field=your_field "^\w*:\\\\[^\\\]*\\\\\w*\\\\[^\\\]*\\\\[^\\\]*\\\\(?<filename>.*)" which you can test using this search: | makeresults | eval your_field="C:\a\b\c\abc\xyz\abc.h" | append [ | makeresults | eval your_field="C:\a\b\c\abc.pqr.a1.b1.jkl\xyz\abc.h" ] | rex field=your_field "^\w*:\\\\[^\\\]*\\\\\w*\\\\[^\\\]*\\\\[^\\\]*\\\\(?<filename>.*)" Ciao. Giuseppe
Hi @harryhcg, as also @yuanliu hinted, you have to add another backslash to the regex: | rex "RETURN\\\\\"\:\\\\\"(?<Field2>[^\\]+)" Ciao. Giuseppe
Hi @Sponi, you cannot receive syslog directly on Splunk Cloud. Usually the best approach is to have one (better, two) forwarders (Heavy or Universal) on premises acting as syslog servers, whose job is to send the logs to Splunk Cloud. Ciao. Giuseppe
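A minimal sketch of the receiving side on that on-premises forwarder; the port, sourcetype and index are assumptions, and many sites instead prefer rsyslog/syslog-ng writing to files that the forwarder monitors:

```
# inputs.conf on the on-premises forwarder acting as syslog target
[udp://514]
sourcetype = syslog
index = network

[tcp://514]
sourcetype = syslog
index = network
```

The rsyslog client then targets this forwarder's hostname/IP and port, and the forwarder's outputs.conf sends the data on to Splunk Cloud.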
If you have to use regex, you will need more backslashes. | rex "@RETURN\\\\\":\\\\\"(?<Field2>[^\\\]+)"
Hello, I have a restricted rsyslog client. I can only specify a hostname or IP and port there as the target to send the syslog to. Where can I find the hostname or IP for my Splunk Cloud instance to receive the syslog?   Thank you
Has this issue been resolved, and if so, what was the solution?
If something is "not giving (you) the correct result," you need to describe what the correct result is.  Otherwise volunteers will be wasting their time guessing. Maybe you mean the alternative NOT DisplayName="Carbon Black Cloud Sensor 64-bit"? Maybe there is something else in the data that you didn't describe that others need to know in order to help?
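If the goal is "hosts where the app is absent", a common pattern is to aggregate per host first and then filter. A sketch, assuming every host reports at least one DisplayName event to this sourcetype:

```
index=xxxx sourcetype="Script:InstalledApps"
| stats values(DisplayName) as installed_apps by host
| where isnull(mvfind(installed_apps, "Carbon Black Cloud Sensor 64-bit"))
| table host
```

Note the caveat: a host that reports no events at all will not appear in the results; finding those requires comparing against an external inventory (e.g. a lookup of all known hosts).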
Hello , did you find a solution for this problem ? I'm facing the same issue and the data coming from HEC is never dropped.
Timezone issue: different data is visible to users in different locations when I select the previous month.

Condition: | where abc>="-1mon@mon" and abc<"@mon"

It's taking the system time, not a common time, so users are facing issues. Is there any query to convert to a common UTC value?
Hi @gcusello, I tested it and it is working fine. The paths in my data vary from one another; I may have data something like this. In these conditions, will it work? C:\a\b\c\abc.pqr.a1.b1.jkl\xyz\abc.h
index=xxxx sourcetype="Script:InstalledApps" DisplayName="Carbon Black Cloud Sensor 64-bit" I am trying to get the list/names of hosts that don't have Carbon Black installed. Can someone help me with a simple query for this?  If I do DisplayName!= and then table the host, it's not giving me the correct result.
CIDR is just a notation.  Nothing prevents you from using a 64-bit mask, i.e., host address.  For example, 2001:db8:3333:4444:5555:6666::2101/64
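To illustrate, cidrmatch() in recent Splunk versions accepts IPv6 prefixes directly; the addresses below use the IPv6 documentation range and are illustrative only:

```
| makeresults
| eval ip = "2001:db8:3333:4444:5555:6666:0:2101"
| eval in_subnet = if(cidrmatch("2001:db8:3333:4444::/64", ip), "yes", "no")
```

Here the /64 prefix covers every interface address in that subnet, while /128 would match exactly one host address.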