All Posts

Hi @danielbb

There was a very similar question the other day about this; please see my answer below, or check out the original question at https://community.splunk.com/t5/Splunk-Enterprise-Security/Can-Splunk-read-a-CSV-file-and-automatically-upload-it-as-a/m-p/744948/highlight/true#M12497

The other option, as you mentioned, would be to use the REST API. There are some scripts at https://github.com/mthcht/lookup-editor_scripts#readme which aim to achieve this, if that is the route you want to go down.

@livehybrid wrote:
If you have a CSV on a forwarder that you want to become a lookup in Splunk, then the best way to achieve this is probably to monitor the file (using monitor:// in inputs.conf) and send it to a specific index on your Splunk indexers. Then create a scheduled search which searches that index, retrieves the forwarded data, and writes it out to a lookup (using the | outputlookup command). Exactly how the resulting search looks will depend on how and when the CSV is updated, but ultimately this should be a viable solution. There may be other solutions, but they would require significantly more engineering effort.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
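For anyone following along, here is a minimal sketch of the monitor-and-outputlookup approach described above. The file path, index, sourcetype, column names, and lookup filename are placeholders, not values taken from this thread.

inputs.conf on the universal forwarder:

[monitor:///opt/data/customers.csv]
index = customer_csv
sourcetype = customer_csv

Scheduled search saved on the search head, run after the file is expected to have been updated:

index=customer_csv sourcetype=customer_csv earliest=-24h
| dedup customer_id
| table customer_id, customer_name, region
| outputlookup customers_lookup.csv

Depending on how the CSV is written, you may also want INDEXED_EXTRACTIONS = csv in props.conf on the forwarder so the columns arrive as fields; otherwise extract them at search time before the outputlookup.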
We have a universal forwarder, and the customer has a CSV file on this machine that they would like to ingest. The customer would like it to end up as a lookup, so I wonder whether we should ingest the CSV via the UF or, potentially, send it via the REST API to be uploaded as a lookup. Does the latter option make sense?
Okay @Hussein_Desouky, I think I have good news. I was able to replicate this by using longer items in the link list (see the screenshot below; there should be 4 items showing).

By adding the following HTML inside the <fieldset></fieldset> I was able to fix the issue, which is caused by the inputs having a fixed height:

<html><style type="text/css">div[data-test="radio-bar"] { height:auto; } </style></html>

This then works (see below). Full dashboard XML for my test:

<form version="1.1" theme="dark">
  <!-- Fieldset for dropdown input -->
  <fieldset submitButton="true" autoRun="true">
    <input type="link" token="field1">
      <label>field1</label>
      <choice value="Test1">Testing something longer</choice>
      <choice value="Test2">Testing2 something longer</choice>
      <choice value="Test3">Testing3 something longer</choice>
      <choice value="Test4">Test4 Test4 Test4 Test4 Test4 Test4</choice>
    </input>
    <html><style type="text/css">div[data-test="radio-bar"] { height:auto; } </style></html>
  </fieldset>
  <row>
    <panel>
      <event>
        <search>
          <query>|windbag</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
        <option name="list.drilldown">none</option>
      </event>
    </panel>
  </row>
</form>

So hopefully this will fix it for you too. However, I do think this is a bug that needs raising with support. I'll raise it myself too, but it would be worth you raising it as well so they can allocate it to your account and keep you up to date with the progress of a permanent resolution.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
Hi @nithys

As @bowesmana mentioned, since you don't have many variants you should list them explicitly in an "IN" within your search. Then do any evals needed to align your different events, such as using COALESCE to map different field names onto a common field name (e.g. | eval responseTime=COALESCE(responseTime, response_time)).

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
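A minimal sketch of that pattern, with hypothetical index, sourcetypes, and field names (app_logs, serviceA/serviceB, and response_time vs responseTime are placeholders):

index=app_logs sourcetype IN (serviceA, serviceB)
| eval responseTime=coalesce(responseTime, response_time)
| eval status=coalesce(status, status_code)
| stats avg(responseTime) as avg_response_time, count by sourcetype, status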
Hi @fatsug

Unfortunately the only supported deployment process for this with a SHC is via the deployer, as you have described. It isn't uncommon to need to increase the maximum bundle size, but it sounds like you've already dealt with that side of it.

Even though the app is large, I don't think it should take quite so long to package and distribute. It's worth checking _internal for any errors or warnings relating to this, to see if there are any other underlying issues that could be slowing it down.

The other thing to check is whether the PSC app directory on your deployer (e.g. Splunk_SA_Scientific_Python_linux_x86_64) contains previous copies of the app - e.g. have you extracted a newer version over the old one? If so, there may be a bunch of libraries/dependencies and other files from the old version which are no longer needed. If this is the case, I would recommend backing the directory up and then deleting it before re-extracting the latest version into your $SPLUNK_HOME/etc/shcluster/apps folder.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
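As a starting point for that _internal check, a search along these lines, run over the period of a slow push, should surface related warnings and errors (the keyword filter is illustrative, not exhaustive):

index=_internal sourcetype=splunkd log_level IN (WARN, ERROR) (bundle OR shcluster OR deploy)
| stats count by host, component
| sort - count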
Hello @livehybrid, Thank you for your prompt response. There’s no custom CSS applied to the dashboard, and we’ve also tested in incognito mode with no success unfortunately. The issue persists for all users within our organization, regardless of the browser being used. Regards,
This may be a "dumb" question, but I'll just throw it out there while I try to work it out. The Python for Scientific Computing (PSC) app is HUGE. We have a clustered environment and Search Heads (S... See more...
This may be a "dumb" question, but I'll just throw it out there while I try to work it out. The Python for Scientific Computing (PSC) app is HUGE. We have a clustered environment and Search Heads (SH) receive configuration from our deployer. During initial setup the maximum bundle size was increased to allow pushing the PSC app from the deployer to the SHs. While it worked we've noticed that any push after adding the PSC app to the deployer now takes around 2 minutes to complete, regardless of how small of a change even if no restart is needed. I was hoping there was a way to install the PSC app locally in the search head cluster without going through the deployer. To option to "install from file" is not present in the web UI, assumably since we have a deployer for managing apps. Removing the app from the deployer and consequentially the SH cluster I tried to unpack the app into the /opt/splunk/etc/apps folder but it is then removed/deleted automatically as the cluster is restarted. Presumably since it is not available on the deployment server. So, how should we install and use PSC in a clustered environment? Is the only/correct way to to push the giant app from the deployer or is there another way to distribute the app?  All feedback and/or suggestions are welcome
Hi @Hussein_Desouky

This is unusual - it doesn't appear to behave this way for me. Are you using any custom CSS on your dashboard? Please could you try in incognito mode to rule out any cached CSS/JS in your browser?

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
Hello,

After upgrading from Splunk 9.1.0 to 9.4.1, we’ve noticed a display issue affecting all dashboards that use Link List filters at the top. As shown in the screenshots, the dashboard panels now appear above the Link List filters, making it difficult or impossible for users to interact with the buttons underneath.

Note: Converting the Link List to a Radio Button input resolves the issue, but I'm looking for a way to continue using the Link List as it worked in the previous version.

Has anyone experienced this or found a workaround?

Regards,
Love the code, but it seemed to only handle one value in the lookup. What if the event (comparing host in the table to the event) has 2 non-null fields that need to be compared to the 2 in the lookup table? In your example they all had the same columns; 3 fields were in the table and the event had 4 different fields. But I have something to start playing with. I will continue to experiment with this while onboarding other stuff. Looking forward to hearing from you again.
1. Again - where did you put the fields.conf? (But this shouldn't affect tstats.)
2. Do you have any other _meta definitions on your UF? Did you verify the effective config with btool?
3. Try | walklex index=index_abc type=field over a longer time span and see if you get your id as one of the results.
Yes, for the fields in root, there is no problem. I omit one point : the structure json is in another structure JSON ... , hence the "SOURCE_KEY = field:message" in transforms.conf { "root": {   "... See more...
Yes, for the fields at the root there is no problem. I omitted one point: the JSON structure is nested inside another JSON structure, hence the "SOURCE_KEY = field:message" in transforms.conf:

{
  "root": {
    "field1": "value1",
    "message": {
      "var1": 132,
      "var2": "toto",
      "var3": {},
      "var4": {"A": 1, "B": 2},
      "var5": {"C": {"D": 5}}
    }
  }
}

After indexing, field1 is accessible, because the source JSON is recognised and interpreted as JSON. I need to parse message so that the var* fields are extracted like field1.
All your picture shows is that makeresults can parse your string (which is valid JSON) and extract the first-level fields. This does not demonstrate that the regex you have used is fit for purpose. Here I have updated your regex to escape the double quotes, to demonstrate what is being extracted as field name and value. You should either set your log format to JSON so that Splunk automatically extracts the fields, or update your regex to take into account the recursive nature of JSON-structured data.
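If you go down the automatic-extraction route, a minimal props.conf sketch would be something like the following; my_json_sourcetype is a placeholder, and you would pick one of the two options rather than both:

# Search-time extraction (search head / indexers)
[my_json_sourcetype]
KV_MODE = json

# Or index-time extraction, on the component doing the parsing
[my_json_sourcetype]
INDEXED_EXTRACTIONS = json

With KV_MODE = json the nested keys come out as dotted field names (e.g. message.var1), which may remove the need for the custom transform entirely.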
Hi @pck_npluyaud

If the data is JSON then you shouldn’t need to extract the fields manually. What do you get if you send the JSON but do not apply the transforms?

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing.
| tstats count where index=index_abc by id

There are no results for this query, but the events are there in the index.
Hello. For reasons of JSON log splitting, I have a problem with a complex structure. The integration is in a forwarder (not UF), in transforms.conf.  For example : { "var1":132,"var2":"toto","var... See more...
Hello. Because of how my JSON logs are split, I have a problem with a complex structure. The extraction is done on a (heavy) forwarder, not a UF, in transforms.conf.

For example:

{ "var1":132,"var2":"toto","var3":{},"var4":{"A":1,"B":2},"var5":{"C":{"D":5}}}

The expected result:

"var1":132
"var2":"toto"
"var3":{}
"var4":{"A":1,"B":2}
"var5":{"C":{"D":5}}

Currently I use:

[extract_message]
SOURCE_KEY = field:message
REGEX = "([^"]*)":("[^"}]*"|[^,"]*|\d{1,})
FORMAT = $1::$2
REPEAT_MATCH = true
WRITE_META = true

In an online regex tester it works, but in Splunk it did not match...
Hi @Leonardo1998

You make a good point here - do you know if logGroupIdentifier can be used for non-cross-account log groups?

To answer your question, you cannot make changes to files from Splunkbase apps in Splunk Cloud. Whilst you could clone the app and upload it with a unique ID containing the amendments, you would be creating a supportability and maintenance nightmare for yourself, unfortunately.

I think the best solution at the moment would be to raise it as a bug with Splunk Support and see if they can give you a timeline on a fix. In the meantime I will see if I can test this change with a non-cross-account collection.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing.
Hi everyone,

I'm working with the Splunk Add-on for AWS on Splunk Cloud, and I’ve run into an issue when trying to collect CloudWatch Logs from a cross-account AWS setup.

After digging through the Python code inside the add-on, I discovered that it uses the logGroupName parameter when calling describe_log_streams() via Boto3. However, in cross-account scenarios, AWS requires the use of logGroupIdentifier (with the full ARN of the log group), and you can’t use both parameters at the same time. So, even though AWS allows log collection across accounts using logGroupIdentifier, the current implementation in the add-on makes it impossible to use this feature correctly. I was able to identify the exact line of code that causes the issue and verified that simply replacing "logGroupName" with "logGroupIdentifier" solves the problem.

Given that I'm on Splunk Cloud, I have a few questions for those with more experience in similar situations:

1. Is it possible to modify that single line of Python code directly in the official add-on deployed in Splunk Cloud (maybe through the UI or some workaround), or is that completely locked down?
2. I could clone the add-on, patch it, and submit it as a custom app - but would running a custom version of the AWS add-on cause issues with future Splunk Support cases? (i.e., would support be denied for data coming from a modified TA?)
3. More broadly, for anyone who’s set up Splunk in cross-account AWS environments: what’s your recommended approach for collecting CloudWatch Logs in this scenario, given the limitations of the official add-on?

Thanks in advance for any insights.
Hi @Ara  As others have said, I dont think a map is well placed here, you might find the following useful, which determines if your "someString" is present and filters those which have this. index=... See more...
Hi @Ara

As others have said, I don't think a map is well placed here. You might find the following useful; it determines whether your "someString" is present and keeps only the traces which have it.

index=xyz
| rex field=msg "DEBUG\s+\|\s+(?<traceid>[a-f0-9-]{36})"
| rex field=msg "\"artifact_guid\":\"(?<artifact_guid>[a-f0-9-]{36})\""
| rex field=msg "\"email_address\":\"(?<email_address>[^\"]+)\""
| eval isInteresting=IF(searchmatch("someString"),1,0)
| stats max(isInteresting) as isInteresting, values(artifact_guid) as artifact_guid, values(email_address) as email_address by traceid
| where isInteresting>0

You might be able to simplify this by handling the rex separately with field extractions, but the premise here is that you use searchmatch to check for your interesting string, then filter out the traces which do not have it after matching the traceid with the other fields you want.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
Adding to valid @livehybrid points, you should set INDEXED_VALUE=false. It has nothing to do with the issue at hand but without it you won't be able to search for id=123 if then"123" string isn't con... See more...
Adding to @livehybrid's valid points: you should set INDEXED_VALUE=false. It has nothing to do with the issue at hand, but without it you won't be able to search for id=123 if the "123" string isn't contained within the raw event.
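For reference, a minimal fields.conf sketch of that setting, assuming the index-time extracted field is called id as in this thread:

[id]
# declare the field as index-time extracted
INDEXED = true
# the value is not present verbatim in _raw, so search the indexed field directly
INDEXED_VALUE = false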