All Posts
Hi, I have the results of an append operation as follows:

ID   Col3   col4   col5
a                  abc
a    abc    No
a    xyz    Yes
b                  abc
b                  xyz
b    xyz    No
b    fgh    Yes
b    abc    No
f                  abc
f    abc    No
f    xyz    No
i                  abc
i                  xyz
i    xyz    Yes
i    abc    No

The result from the first table and the result from the second should be merged respectively. I cannot use

```
| stats values(col1) values(col2) values(col3) by ID
```

because I would lose which "No"/"Yes" value belongs to which Col3 value. I want to create a result as follows:

ID   Col3   col4   col5
a    abc    No     abc
a    xyz    Yes
b    xyz    No     xyz
b    fgh    Yes
b    abc    No     abc
f    abc    No     abc
f    xyz    No
i    xyz    Yes    xyz
i    abc    No     abc

I think something like SQL's full join would do the trick, but I am totally stuck.
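(One way to emulate that full join, as a minimal sketch assuming the field names shown above: give both result sets a common join key with coalesce, then group on it. The append subsearch placeholders are hypothetical.)

```
... first search ...
| append [ ... second search ... ]
| eval key=coalesce(Col3, col5)
| stats values(col4) as col4 values(col5) as col5 by ID key
| rename key as Col3
| table ID Col3 col4 col5
```

This keeps each Yes/No paired with its Col3 value because the grouping is by ID and key, not by ID alone.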
So, to clarify: <cids> is the placeholder for the values produced by the regex, and its placement marks where the actual value would appear in the string, i.e. the Log field?
Hey there,

Your post is a couple of months old, but since I stumbled into the same issue, I figured there will be more Splunkers in the future who encounter the same challenge and would appreciate the solution being documented somewhere. The first part of my response lays out how to resolve the issue; in the second part I talk about why the issue arises in the first place.

Part 1 - How to resolve the issue

1. Apps > DSDL App > Configuration > Setup > check the "Yes" box > scroll to the "Splunk Access in Jupyter (optional)" section and use the following settings:
   - Enable Splunk Access: Yes
   - Splunk Access Token: paste your token here. If you don't have one, you can create a token under Settings > Tokens.
   - Splunk Host Address: paste your host address here (in my case it has this format: 123.456.78.90)
   - Splunk Management Port: 8089 (this is the default; if you did not change it, you can use 8089)
2. Press "Test & Save".
3. Apps > DSDL App > Configuration > Containers > start your container. If your container was already running, stop it and restart it.
4. Apps > DSDL App > Configuration > Containers > press the "JUPYTER LAB" button.
5. Open "barebone_template.ipynb" in the /notebooks folder.
6. Execute the code that pulls data from Splunk. Now it should work just fine.

```
import libs.SplunkSearch as SplunkSearch
search = SplunkSearch.SplunkSearch()
```

Part 2 - More details in case you are curious

Execute the following code in your Jupyter notebook. Here you can inspect all os variables.

```
import os
os.environ
```

Of interest for us are the following:

```
os.environ["splunk_access_host"]
os.environ["splunk_access_port"]
os.environ["splunk_access_token"]
```

If you haven't fixed the issue yet, os.environ["splunk_access_enabled"] should return "false". You most likely started the container before you made the settings as I described in Part 1. These os.environ variables are important, since the function that lets you pull data from Splunk relies on them. The error in your screenshot, "An error occurred: int() argument must be a string, ...", stems from the fact that the SplunkSearch() function has no values for host/port/token.

```
import libs.SplunkSearch as SplunkSearch
search = SplunkSearch.SplunkSearch()
```

You can find the source code for the SplunkSearch function in your Jupyter Lab here: /notebooks/libs/SplunkSearch.py. Somewhere in the upper section of this Python code, you see the following.

```
if "splunk_access_enabled" in os.environ:
    access_enabled = os.environ["splunk_access_enabled"]
    if access_enabled == "1":
        self.host = os.environ["splunk_access_host"]
        self.port = os.environ["splunk_access_port"]
        self.token = os.environ["splunk_access_token"]
```

As you can see in the code above, SplunkSearch.py reads the host, port, and token you entered on the settings page if you also set Enable Splunk Access: Yes. If you are familiar with Splunk's REST API, you will recognize that host, port, and token are the values needed to establish a connection from your notebook to Splunk and eventually retrieve search results for your query. I will skip the details, but here are a couple of lines from SplunkSearch.py that illustrate which packages are used, the connection that is made, as well as the search job that is initiated.
```
import splunklib.results as splunk_results
import splunklib.client as splunk_client

self._service = splunk_client.connect(host=self.host, port=self.port, token=self.token)

# create a search job in splunk
job = self.service.jobs.create(
    query_cleaned,
    earliest_time=earliest,
    latest_time=latest,
    adhoc_search_level="smart",
    search_mode="normal")
```

I hope this helps.

Regards,
Gabriel
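(As a quick sanity check before instantiating SplunkSearch, you can confirm from a notebook cell that the container actually picked up the settings. A short sketch, using the environment variable names from the post above:)

```
import os

# All of these should be set once "Enable Splunk Access" was enabled
# before the container was (re)started.
for key in ("splunk_access_enabled", "splunk_access_host",
            "splunk_access_port", "splunk_access_token"):
    print(key, "->", os.environ.get(key, "<not set>"))
```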
Thank you. I was close, ugh.
| rex "(?<month>\w+)-\d"
Hi deepakc and all,

It took a while, but I finally got around to solving this, even if in a far from elegant way. The error message does indeed appear to belong to the certification process of AOB, as deepakc mentioned. It's sort of a check of whether your app uses best practices, has risks, etc. However, this is unlikely to have been the cause of why I wasn't able to get my data, despite my instance being able to connect to the internet.

There is one simple workaround: simply set the "verify" parameter in your HTTP request to False. E.g.:

```
response = helper.send_http_request(
    "<your api link here>",
    "GET",
    parameters=None,
    payload=None,
    headers=headers,
    cookies=None,
    verify=False,
    cert=None,
    timeout=None,
    use_proxy=True)
```

It's a bit of an ugly solution, but for test purposes it does the job, and I was finally able to receive the data from my API endpoint. This is probably not advisable for production systems, for security reasons, though.

Thanks for the helpful input, and everyone else have fun while splunking!
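(A slightly safer alternative, assuming the helper passes verify through to the underlying requests library, which accepts a CA bundle path, would be to point it at your certificate bundle instead of disabling verification entirely. The path below is hypothetical:)

```
response = helper.send_http_request(
    "<your api link here>",
    "GET",
    parameters=None,
    payload=None,
    headers=headers,
    cookies=None,
    verify="/path/to/ca_bundle.pem",  # assumption: verify is forwarded to requests
    cert=None,
    timeout=None,
    use_proxy=True)
```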
Hi all,

I have an add-on that uses a REST API to obtain specific logs; each generated event has fixed values for both source and sourcetype. Now there are customers who use props.conf and transforms.conf to change the value of the source according to a particular column within an event; for instance, if the service is 'a', the source changes to 'service_a'; if the service is 'b', it changes to 'service_b'.

The current problem is that obtaining logs works fine, and content can always be found using sourcetype. But when searching by the transformed source, events cannot be found, even though events with source 'service_a' and 'service_b' are visible.

How should I adjust the add-on, or how should I configure local settings, so that I can search using source?

Regards,
Emily
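(For reference, a source-rewriting transform of the kind the customers describe typically looks something like this. A sketch; the sourcetype and stanza names here are hypothetical:)

```
# props.conf (hypothetical sourcetype name)
[my_addon:events]
TRANSFORMS-set_source = set_source_service_a

# transforms.conf
[set_source_service_a]
REGEX = service=a
# rewrite the event's source metadata at index time
DEST_KEY = MetaData:Source
FORMAT = source::service_a
```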
I want to extract Jan from Jan-24.
Hi @triva79,
good for you, see you next time!
Ciao and happy splunking
Giuseppe
P.S.: Karma Points are appreciated by all the contributors
Hi @BRFZ,
yes, see https://docs.splunk.com/Documentation/Splunk/9.2.1/Admin/Indexesconf
Anyway, frozenTimePeriodInSecs is the overall retention time (hot + warm + cold), and maxHotSpanSecs governs the hot/warm period.
Ciao.
Giuseppe
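(As an illustration, a retention setting in indexes.conf might look like this. A sketch; the index name and the 90-day value are hypothetical:)

```
# indexes.conf (hypothetical index with 90-day total retention)
[my_index]
# 90 days * 86400 seconds = 7776000; buckets older than this are frozen
frozenTimePeriodInSecs = 7776000
```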
That perfectly resolved my problem. Many thanks!!!
The best way forward here is to set up a new community account using your new email address, then contact community support and ask them to transfer your badges etc. from your old account to your new account. There isn't an easy self-service way to do this (afaik).
Hello, is it possible to define the retention duration of logs (hot, warm, and cold)? If yes, how can this be done? Or do we only have the option to define frozenTimePeriodInSecs?
Sorry, my bad... that worked. Thanks so much!
That didn't seem to do anything. I am trying to sort the columns in order, not the rows.
Have you tried

```
| table endpointOS *
```

That should sort them in alphabetical order, which might be enough here.
Is that a field in Splunk that is a string? You can do this by swapping the parts around - for your first example:

```
| eval date=replace(date, "(\d{4})-(\d{2})", "\2-\1")
```

and for your second:

```
| eval date=replace(date, "(\d{4})\/Q(\d)", "Q\2/\1")
```

where your date field is called date.
Hey, not sure if anyone can help - I am trying to sort the columns in numerical order. Thanks in advance!
P_vandereerden's reply is a good starting point, but there are two things to consider:

1. The use of a subsearch to constrain an outer search may not perform well if there are a large number of request IDs with that log line. If you are expecting a large number of hits for "log_line", you may need to consider a different approach.

2. The transaction command has limitations, and although it has its use cases, its options should be understood in relation to your data set, particularly when your data set is large. Very often the stats command can be used to achieve the same thing as transaction without the limitations, so it very much depends on what you want to do with the resultant grouped data. For example, this is generally a simple replacement for transaction:

```
| stats values(_raw) as _raw range(_time) as duration count by requestId
```

which will give you the raw events, the duration from first to last, and the number of events for any given request ID.
Same for First Name and Last Name (under Personal Information): Any changes made here will be reversed after the next login.