All Posts

It's not working for this requirement. I see it's returning the entire output field value multiple times (equal to the number of lines in the field).

Note: "not working" is about the least informative phrase in the best of scenarios, as it conveys virtually no information. Yes, the original output field is expected to be attached to each row. If you don't want to see that, filter it out:

| eval _raw = replace(output, "\|", ",")
| multikv
| fields - _* linecount output

The real question is: are the fields DbName, CurrentSizeGB, etc., extracted? (Each row is its own event. If you want multivalued fields instead, you can do some stats.) Here is an emulation that you can play with and compare with real data:

| makeresults
| eval output = "DbName|CurrentSizeGB|UsedSpaceGB|FreeSpaceGB|ExtractedDate
abc|60.738|39.844|20.894|Sep 5 2023 10:00AM
def|0.098|0.017|0.081|Sep 5 2023 10:00AM
pqr|15.859|0.534|15.325|Sep 5 2023 10:00AM
xyz|32.733|0.675|32.058|Sep 5 2023 10:00AM"
``` data emulation above ```

The above emulated input combined with the search gives:

CurrentSizeGB  DbName  ExtractedDate        FreeSpaceGB  UsedSpaceGB
60.738         abc     Sep 5 2023 10:00AM   20.894       39.844
0.098          def     Sep 5 2023 10:00AM   0.081        0.017
15.859         pqr     Sep 5 2023 10:00AM   15.325       0.534
32.733         xyz     Sep 5 2023 10:00AM   32.058       0.675

If these fields are not extracted as expected, you need to illustrate your original data more precisely so volunteers can help diagnose. (Anonymize as needed.) In addition, an illustration of the actual output will also be helpful, instead of an uninformative phrase like "not working".
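As a sketch of the "do some stats" idea (field names taken from the emulation above; adjust to your real data), you could collapse the rows back into a single result with multivalued fields like this:

```
| eval _raw = replace(output, "\|", ",")
| multikv
| fields - _* linecount output
| stats list(DbName) AS DbName list(CurrentSizeGB) AS CurrentSizeGB list(UsedSpaceGB) AS UsedSpaceGB list(FreeSpaceGB) AS FreeSpaceGB list(ExtractedDate) AS ExtractedDate
```

list() preserves row order and duplicates; use values() instead if you want deduplicated, sorted values.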
Hi @Adpafer, what do you mean by "disable automatic load balancing", and why do you want to do this? Anyway, if you want to send logs from one UF to one specific Indexer (there's no reason to do this!), you can use only that address in the outputs.conf of the UF. If you want to perform selective forwarding, see the configurations at https://docs.splunk.com/Documentation/Splunk/9.1.1/Forwarding/Routeandfilterdatad#Route_inputs_to_specific_indexers_based_on_the_data_input Ciao. Giuseppe
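For example, a minimal outputs.conf on the UF pointing at a single indexer could look like the sketch below (the group name, hostname, and port are placeholders to adapt to your environment):

```
[tcpout]
defaultGroup = my_single_indexer

[tcpout:my_single_indexer]
server = idx01.example.com:9997
```

With only one server listed in the target group, there is nothing to load-balance across, so all events from that UF go to that indexer.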
Hi @anooshac, let me understand: you could have different log formats, e.g. "C:\a\b\c\abc\xyz\abc.h" or "C:\a\b\c\abc.pqr.a1.b1.jkl\xyz\abc.h", is that correct? In this case, you could try:

| rex field=your_field "^\w*:\\\\[^\\\]*\\\\\w*\\\\[^\\\]*\\\\[^\\\]*\\\\(?<filename>.*)"

which you can test using this search:

| makeresults
| eval your_field="C:\a\b\c\abc\xyz\abc.h"
| append [ | makeresults | eval your_field="C:\a\b\c\abc.pqr.a1.b1.jkl\xyz\abc.h" ]
| rex field=your_field "^\w*:\\\\[^\\\]*\\\\\w*\\\\[^\\\]*\\\\[^\\\]*\\\\(?<filename>.*)"

Ciao. Giuseppe
Hi @harryhcg, as @yuanliu also hinted, you have to add another backslash to the regex:

| rex "RETURN\\\\\"\:\\\\\"(?<Field2>[^\\]+)"

Ciao. Giuseppe
Hi @Sponi, you cannot directly receive syslog on Splunk Cloud. Usually the best approach is to have one (better, two) on-premises Forwarders (Heavy or Universal) acting as syslog servers, whose job is to send the logs on to Splunk Cloud. Ciao. Giuseppe
If you have to use regex, you will need more backslashes. | rex "@RETURN\\\\\":\\\\\"(?<Field2>[^\\\]+)"
Hello, I have a restricted rsyslog client. I can only specify a hostname or IP and a port there as the target to send the syslog to. Where can I find the hostname or IP for my Splunk Cloud to receive the syslog? Thank you
Has this issue been resolved, and if so, what was the solution?
If something is "not giving (you) the correct result," you need to describe what the correct result is. Otherwise volunteers will be wasting their time guessing. Maybe you mean the alternative NOT DisplayName="Carbon Black Cloud Sensor 64-bit"? Maybe there is something else in the data that you didn't describe that others need to know in order to help?
Hello, did you find a solution to this problem? I'm facing the same issue, and the data coming from HEC is never dropped.
Timezone issue: different data is visible to users in different locations when I select the previous month.

Condition: | where abc>="-1mon@mon" and abc<"@mon"

It's taking each user's system time, not a common time, so users are facing issues. Is there any query to convert to a common UTC value?
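A sketch of one possible approach (untested, and assuming abc holds epoch seconds): derive each user's UTC offset from strftime, then shift the snap-to-month so the boundaries land on UTC month starts regardless of the searching user's timezone:

```
| eval tz = strftime(now(), "%z")
| eval sign = if(substr(tz,1,1)="-", -1, 1)
| eval off = sign * (tonumber(substr(tz,2,2))*3600 + tonumber(substr(tz,4,2))*60)
| eval prev_mon_start_utc = relative_time(now() - off, "-1mon@mon") + off
| eval mon_start_utc = relative_time(now() - off, "@mon") + off
| where abc >= prev_mon_start_utc AND abc < mon_start_utc
```

The idea: relative_time() snaps using the searching user's timezone, so subtracting the offset before snapping and adding it back afterwards makes the boundaries UTC month starts for every user. Verify against your data before relying on it.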
Hi @gcusello, I tested it and it is working fine. The paths in my data vary from one another, though. I may have data like this; will it work in these conditions? C:\a\b\c\abc.pqr.a1.b1.jkl\xyz\abc.h
index=xxxx sourcetype="Script:InstalledApps" DisplayName="Carbon Black Cloud Sensor 64-bit" I am trying to get the list/names of hosts that don't have Carbon Black installed. Can someone help me with a simple query for this? If I do DisplayName!= and then table the host, it doesn't give me the correct result.
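A common pattern for "which hosts are missing app X" (a sketch, reusing the index and field names from the search above; untested against your data) is to collect the installed apps per host first, then keep only the hosts where the app never appears:

```
index=xxxx sourcetype="Script:InstalledApps"
| stats values(DisplayName) AS apps BY host
| where isnull(mvfind(apps, "Carbon Black Cloud Sensor 64-bit"))
| table host
```

DisplayName!= on its own doesn't work because it filters individual events, and a host with Carbon Black installed still has plenty of other events whose DisplayName differs; the stats-then-filter step turns the question into a per-host one.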
CIDR is just a notation.  Nothing prevents you from using a 64-bit mask, i.e., host address.  For example, 2001:db8:3333:4444:5555:6666::2101/64
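To illustrate with eval (a sketch, assuming a Splunk version whose cidrmatch supports IPv6; the addresses are the example values from above):

```
| makeresults
| eval ip = "2001:db8:3333:4444:5555:6666:0:2101"
| eval in_subnet = if(cidrmatch("2001:db8:3333:4444::/64", ip), "yes", "no")
```

The /64 on a full host address simply says "compare only the first 64 bits," which is what makes the notation usable for both subnets and individual hosts.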
Hi all, I am using the Splunk Add-on for GCP to pull logs from a log sink via Pub/Sub. I configured a Pub/Sub input inside the add-on and it is successfully pulling the logs. But I want to confirm: does the GCP add-on, after receiving messages from Pub/Sub, send back an ACK (acknowledgement) to Pub/Sub so that the same message is not sent twice or duplicated? There is nothing mentioned about ACK messages in the GCP add-on documentation, so I'm asking here. Please help me out.
Hello, I tried to input a DB with a query as below:

SELECT ..., txn_stamp AS TXTIME, ... FROM mybd WHERE txn_stamp > ? ORDER BY TXTIME ASC

When I hit Execute query, the result produces the error ORA-01861: literal does not match format string. My txn_stamp is a timestamp column with the format YYYY-mm-dd HH:MM:SS (e.g., 2023-08-31 00:00:25). The curious thing is that it sometimes works: executing the query shows data, but it stops at some point, and I suspect it's because of the above error. My thinking is that I want to convert either my DB timestamp format or the rising column timestamp format so that the two match, but I don't know how.
Yes, it is possible to add parameters to a Splunk URL to pre-populate the search query and make it more user-friendly. This can be helpful for sharing saved searches or dashboards with others so that they don't need to manually enter the SPL search.

To pre-populate a search query in a Splunk URL, you can use the q parameter. Here's the basic structure of a Splunk URL with a pre-populated search query:

https://splunk_server:port/en-US/app/<APP_NAME>/search?q=<URL_ENCODED_SEARCH_QUERY>

For example, if you want to pre-populate a search for "error messages," you can encode the query and create a URL like this:

https://splunk_server:port/en-US/app/search/search?q=error%20messages

When users click this URL, they are taken to the Splunk search page with the "error messages" query already in the search bar. They can then run the search or refine it further as needed.

To create the <URL_ENCODED_SEARCH_QUERY> part of the Splunk URL, you need to URL-encode the actual SPL query you want to pre-populate. URL encoding is necessary to make sure that special characters and spaces in the query are correctly formatted for a URL. Here's an example. Say your SPL query is:

index=myindex sourcetype=mylog "error messages" OR "warning messages" source="/var/log/app.log"

To URL-encode this query, spaces become %20, equals signs become %3D, double quotes become %22, and slashes become %2F:

index%3Dmyindex%20sourcetype%3Dmylog%20%22error%20messages%22%20OR%20%22warning%20messages%22%20source%3D%22%2Fvar%2Flog%2Fapp.log%22

So your complete Splunk URL with the pre-populated, URL-encoded search query would look like:

https://splunk_server:port/en-US/app/search/search?q=index%3Dmyindex%20sourcetype%3Dmylog%20%22error%20messages%22%20OR%20%22warning%20messages%22%20source%3D%22%2Fvar%2Flog%2Fapp.log%22

You can use online URL-encoding tools (I am using CyberChef) to automatically encode your SPL query if it contains complex characters.
Just paste your query into one of these tools, and it will generate the URL-encoded version for you.
@michael_vi Did you happen to get this list at all? I may be needing it in the near future, so I'm just trying to get ahead of it.
It is a known issue related to SPL-235420. It is fixed in 9.1.0 https://docs.splunk.com/Documentation/Splunk/9.1.0/ReleaseNotes/Fixedissues#Charting.2C_reporting.2C_and_visualization_issues As a workaround, specify the app name directly in the dashboard definition.
Hi, I have the same problem here. I tried to set it to 60 days, but it only shows 30 days of data when using your SPL.