I have configured OAuth in a custom account in the Splunk Add-on for Salesforce. After configuring the account and saving the configuration, it reaches out to Salesforce. I log in to Salesforce, and it asks me to grant access. Once I click Submit, the app comes back with the error "Error occurred while trying to authenticate. Please try again." I am not sure what the issue is, or whether something needs to be configured on the Salesforce side.
Hi! I have a table created with a Splunk search, with the name of the site and projects with due dates, that looks like this:

SITE   MARCH     APRIL     MAY
site1  project1            project2
site2  project2
site3            project3

Some projects are past due and some are in good standing. To determine whether a project is past due, I simply use an eval statement:

|eval past_due=if(strptime(task_duedate,"%Y-%m-%d") < relative_time(now(), "@d"),1,0)

Can I color-code the cells so that past-due projects are red and projects in good standing are green? Thank you!
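If the dashboard is Simple XML, one common approach (a sketch, not tested against this data) is to encode the past-due flag into the cell value and then color on a pattern match: append a marker such as " (past due)" with eval, then add a format element per month column. The hex colors and the "(past due)" marker below are assumptions:

```xml
<format type="color" field="MARCH">
  <colorPalette type="expression">if(match(value, "past due"), "#DC4E41", "#53A051")</colorPalette>
</format>
```

The same element would be repeated for the APRIL and MAY columns.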
My query searches for (EventCode=509 OR EventCode=118) and generates output (host, Time, EventCode, Task Category, Message). Is it possible to use REPLACE to replace the entire Message field with another message associated with the EventCode?
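A REPLACE stanza matches exact field values, but for swapping the whole Message field based on EventCode, an eval/case sketch like the following may be simpler (the replacement texts and the TaskCategory field name are placeholders, not values from the actual data):

```
(EventCode=509 OR EventCode=118)
| eval Message=case(EventCode=509, "replacement text for 509",
                    EventCode=118, "replacement text for 118",
                    true(), Message)
| table host _time EventCode TaskCategory Message
```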
The Errors dashboard shows 4k errors, but I am not able to find the details for them; the snapshots I have under Errors show only 1k.
How do I extract the cities from this text? \"timezone\" "America/Sao_Paulo\",\"max_counter\":2,\"timezone\":\"America/Brasilia\",\"max_counter\":2... I tried the following query: ... | rex field=city "city: (?<America>)" | top limit=20 city Thanks!
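Assuming every city in the raw text appears after "America/", a sketch using rex with max_match to capture all occurrences (the capture-group name and character class are assumptions about the data):

```
... | rex max_match=0 field=_raw "America/(?<city>[A-Za-z_]+)"
| mvexpand city
| top limit=20 city
```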
Hello, I created a choropleth map. As the map is being rendered, the color for the field value count=0 continually changes until rendering ends. Depending on the time picker, the zero-value color will be different. How can I hardcode the color for count=0 to gray? Preferably it should be gray throughout rendering, but I'll settle for it being gray once rendering is finished. *Ultimately, I would like to use color ranges: 0 = gray, 1-10 = green, 11-20 = orange, 21-1000 = red.

<panel id="ipmap">
  <title>Claims World View</title>
  <map>
    <search base="mainSearch">
      <query>| search (Country!="1-IP not in DB" AND Country!="2-IP Data N/A") | rex field=jsessionid "(?&lt;jsession_id&gt;.+)\." | dedup jsession_id | stats count(jsession_id) AS count BY Country | sort -count | eval count = Country + " - " + count | geom geo_countries allFeatures=True featureIdField="Country"</query>
    </search>
    <option name="drilldown">none</option>
    <option name="height">625</option>
    <option name="mapping.choroplethLayer.colorMode">categorical</option>
    <option name="mapping.choroplethLayer.neutralPoint">0</option>
    <option name="mapping.choroplethLayer.shapeOpacity">1</option>
    <option name="mapping.choroplethLayer.showBorder">1</option>
    <option name="mapping.data.maxClusters">100</option>
    <option name="mapping.map.center">(0,0)</option>
    <option name="mapping.map.panning">0</option>
    <option name="mapping.map.scrollZoom">0</option>
    <option name="mapping.map.zoom">2</option>
    <option name="mapping.markerLayer.markerMaxSize">50</option>
    <option name="mapping.markerLayer.markerMinSize">10</option>
    <option name="mapping.markerLayer.markerOpacity">0.8</option>
    <option name="mapping.showTiles">1</option>
    <option name="mapping.tileLayer.maxZoom">7</option>
    <option name="mapping.tileLayer.minZoom">0</option>
    <option name="mapping.tileLayer.tileOpacity">1</option>
    <option name="mapping.type">choropleth</option>
    <option name="trellis.enabled">0</option>
    <option name="trellis.scales.shared">1</option>
    <option name="trellis.size">medium</option>
  </map>
</panel>

Thanks in advance for your help. Stay safe and healthy, you and yours. God bless, Genesius
I want to create a chart showing the attendance between pre-COVID (February) and current COVID (July) for one of our offices. This is my current search, which gets me the data I need, but I'm unsure how to overlay the data so we can see a direct comparison.  | multisearch [search index="physec_app_lenel" EVDESCR="Access Granted" READERDESC="TOK*" earliest="07/01/2020:20:00:00" latest="07/28/2020:23:00:00" | eval Attendance="July"] [search index="physec_app_lenel" EVDESCR="Access Granted" READERDESC="TOK*" earliest="02/01/2020:01:00:00" latest="02/28/2020:23:00:00" | eval Attendance="February"] | timechart span=1w dc(CARDNUM) by Attendance
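One way to overlay the two periods (a sketch, untested) is to shift the February events forward so both series share the same time axis. February 1 to July 1, 2020 is 151 days (29+31+30+31+30), so the offset below assumes that alignment:

```
| multisearch
    [search index="physec_app_lenel" EVDESCR="Access Granted" READERDESC="TOK*" earliest="07/01/2020:20:00:00" latest="07/28/2020:23:00:00" | eval Attendance="July"]
    [search index="physec_app_lenel" EVDESCR="Access Granted" READERDESC="TOK*" earliest="02/01/2020:01:00:00" latest="02/28/2020:23:00:00" | eval Attendance="February"]
| eval _time=if(Attendance="February", _time + (151*86400), _time)
| timechart span=1w dc(CARDNUM) by Attendance
```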
Hi, I have multiple records with different data_set values. I want to get each data_set record one at a time, so I tried using a count: when the count is 1, display the n-1 record; if the count is 2, display the n-2 record; and so on. I tried using dedup, but the list of columns varies, so a data mismatch was happening. So I thought to get the data based on data_timestamp, which is the data-specific time. With this I am only able to get the latest record, by using latest(data_timestamp) by field, but I am not able to fetch the n-1, n-2, etc. records. Here is the query used to get the latest record:

index="test" source="test_source" | where data = "data_1" and data_set IN ("set_1","set_2","set_2") and data_tag = "tag_1" | stats latest(data_timestamp) as data_timestamp by data_set | table data_timestamp | format

@splunk
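To fetch the n-1 or n-2 record per data_set rather than only the latest, a streamstats sketch may help: sort descending by data_timestamp, rank the rows within each data_set, and filter on the rank (rank=1 is the latest, rank=2 the one before it). This assumes data_timestamp sorts correctly as written:

```
index="test" source="test_source"
| where data="data_1" AND data_set IN ("set_1","set_2") AND data_tag="tag_1"
| sort 0 - data_timestamp
| streamstats count AS rank BY data_set
| where rank=2
```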
I am using the REST API modular input to fetch data from some endpoints, but my endpoint's authentication method is NTLM. The REST API modular input does not offer NTLM as a default, but I have seen that a custom option is available. Can you please guide me on how to add NTLM authentication? I already found the link below, but some more guidance or examples would be great. https://www.baboonbones.com/php/markdown.php?document=rest/README.md
I am trying to mimic the table below. I have the count of the source IP, but how do I get the count of the respective subnets? In the example, they have it in the cell of each subnet value. Here is what I have so far:   index=xxxx sourcetype=xxxx "xxxx" | eval ERROR_CODE=case((RC="200"), "Success", (RC="400"), "User Not Found",(RC="401") , "Bad Password",(RC="500"), "Internal Server Error") | rex field=_raw "TCIP='(?<subnet_24>\d+\.\d+\.\d+)\.\d+" | rex field=_raw "TCIP='(?<subnet_16>\d+\.\d+)\.\d+\.\d+" | eval subnet_24 = subnet_24 +".x" | eval subnet_16 = subnet_16 +".x.x" | stats count(TCIP) by subnet_24 subnet_16 TCIP     Any help is appreciated!
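To put the per-subnet totals alongside the per-IP counts, eventstats can add an aggregate back onto each row. A sketch continuing the search above (the output field names are assumptions):

```
...
| stats count AS ip_count BY subnet_24 subnet_16 TCIP
| eventstats sum(ip_count) AS subnet24_count BY subnet_24
| eventstats sum(ip_count) AS subnet16_count BY subnet_16
```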
How do I extract the date and time from my events? Event Data Sample ------------------------- Jun 4 01:27:01 rofsso504a Usage: /dev/sda1 16G 16G 20K 100% / Jun 4 02:27:01 rofsso504a Usage: /dev/sda1 16G 16G 20K 100% / Jun 4 00:27:01 rofsso504a Usage: /dev/sda1 16G 16G 20K 100% / Jul 31 22:27:01 rofsso504a Usage: /dev/sda4 210G 53G 157G 26% /home Jul 31 08:27:01 rofsso504a Usage: /dev/sda4 210G 53G 157G 26% /home My Search ----------------- index=sso host=rofsso504* PartitionDiskSpaceUsed>25 earliest=-2mon | rename _raw as Event host as Host | eval Timestamp=strftime(_time, "%b %d %H:%M:%S") | table Host _time Timestamp PartitionDiskSpaceUsed Event | sort Host -Timestamp | table _time Timestamp PartitionDiskSpaceUsed Event What I want ------------------ I want the Timestamp column to contain the correct Event Date and Time, but currently it shows the DateTime of the search. 2020-06-04 00:50:56 Jun 04 01:27:01 100 Jun 4 01:27:01 rofsso504a Usage: /dev/sda1 16G 16G 20K 100% / 2020-06-04 00:50:56 Jun 04 02:27:01 100 Jun 4 02:27:01 rofsso504a Usage: /dev/sda1 16G 16G 20K 100% / 2020-06-04 00:50:56 Jun 04 00:27:01 100 Jun 4 00:27:01 rofsso504a Usage: /dev/sda1 16G 16G 20K 100% / 2020-07-31 00:50:56 Jul 31 22:27:01 26 Jul 31 22:27:01 rofsso504a Usage: /dev/sda4 210G 53G 157G 26% /home 2020-07-31 00:50:56 Jul 31 08:27:01 26 Jul 31 08:27:01 rofsso504a Usage: /dev/sda4 210G 53G 157G 26% /home What I get ------------ 2020-06-04 00:50:56 Jun 04 00:50:56 100 Jun 4 01:27:01 rofsso504a Usage: /dev/sda1 16G 16G 20K 100% / 2020-06-04 00:50:56 Jun 04 00:50:56 100 Jun 4 02:27:01 rofsso504a Usage: /dev/sda1 16G 16G 20K 100% / 2020-06-04 00:50:56 Jun 04 00:50:56 100 Jun 4 00:27:01 rofsso504a Usage: /dev/sda1 16G 16G 20K 100% / 2020-07-31 00:50:56 Jul 31 00:50:56 26 Jul 31 22:27:01 rofsso504a Usage: /dev/sda4 210G 53G 157G 26% /home 2020-07-31 00:50:56 Jul 31 00:50:56 26 Jul 31 08:27:01 rofsso504a Usage: /dev/sda4 210G 53G 157G 26% /home  
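Since strftime(_time, ...) formats the indexed time, the event timestamp has to be pulled out of the raw text instead. A sketch (the rex pattern and the hardcoded year are assumptions; syslog-style timestamps carry no year):

```
| rex field=Event "^(?<evt_time>\w{3}\s+\d+\s+\d+:\d+:\d+)"
| eval Timestamp=strftime(strptime(evt_time." 2020", "%b %d %H:%M:%S %Y"), "%b %d %H:%M:%S")
```

The longer-term fix would be correct timestamp recognition at index time (props.conf TIME_FORMAT/TIME_PREFIX for the sourcetype), so that _time itself matches the event.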
Hello guys, I can't find any procedure to fill gaps in the itsi_summary index using the fill_summary_index.py script when the platform is down. Can anyone help me with that? Thank you!
Hello Splunk Community! I currently have the Splunk App for Windows Infrastructure configured and deployed to several universal forwarders. Most of this is working well, and I am working out some things with it, but before I go much further I am trying to find out how to cut back on the amount of process-related data I am ingesting. In 3 short months my windows index is almost 500 GB in size! Briefly looking into the index, I can see that a massive number of events are related to the process category, and while we're interested in seeing process-related data, it's definitely not our top concern, and this data seems to be dominating the index. Does anyone have suggestions for how I could cut this back?
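One place to cut volume is the inputs the deployed Windows TA enables. The fragment below is a sketch only; the stanza names and values are assumptions and depend on the TA version, so check the actual inputs.conf being deployed before changing anything:

```
# Poll process performance counters less often, or disable them entirely
[perfmon://Process]
interval = 300
# disabled = 1

# Drop high-volume process-creation events at the input, if they are not needed
[WinEventLog://Security]
blacklist1 = EventCode="4688"
```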
Guys, I need some help. I've configured props and transforms to change the host field for a Lambda function whose logs I'm collecting. I can see the new host, but I don't find anything when I search for the new host with host=.... I did this configuration on a heavy forwarder. This heavy forwarder concentrates the logs and sends them to a Splunk cluster with two indexers and auto load balancing configured. The host that I'm trying to change is coming from a Lambda function for GuardDuty.

props.conf
[aws:cloudwatch:guardduty]
TRANSFORMS-client = rename_host_guardduty
SHOULD_LINEMERGE = false

transforms.conf
[rename_host_guardduty]
DEST_KEY = MetaData:Host
REGEX = .*
FORMAT = guardduty

Any ideas? Thank you.
@Garethan When trying to test the app, I get the following when trying to restore a dashboard: "Restore has failed to complete successfully in app search, object of type dashboard, with name <object name>". I have confirmed that I'm using the object URI/name (not the label). Looking at backup.log, it gives two warnings and an error:

WARNING i="splunkVCBackup" error/fatal messages in git stderroutput please review. stderrout="b'\n*** Please tell me who you are.\n\nRun\n\n git config --global user.email "you@example.com"\n git config --global user.name "Your Name"\n\nto set your account\'s default identity.\nOmit --global to set the identity only in this repository.\n\nfatal: empty ident name (for <splunk@<server.xx.xxx>) not allowed\nTo git@gitlab.com-<repo>/sharedteams/<project>/<repo>.git\n * [new tag] 2020-08-03_0746 -> 2020-08-03_0746\n'"

ERROR i="splunkVCBackup" git failure occurred during runtime, not updating the epoch value. This failure may require investigation, please refer to the WARNING messages in the logs

WARNING i="splunkVCBackup" wiping the git directory, dir=/opt/<repo dir>/vcbackup to allow re-cloning on next run of the script.

vcbackup is the temp directory under the main repo.
I'm adding a new search head to my cluster and keep receiving the error: CMSearchHead - 'Unable to reach master' for master=https://xxx:8089. I'm running version 7.3.4, with a search head cluster of 3 that I'm trying to make 4, and 6 clustered indexers. I have a deployment server and a search head deployment server. The cluster master acts as the master for everything except forwarders. The nodes sit behind an F5 load balancer, all in the same pool, and are communicating. The new search head can communicate with the indexers, but not the master, though I can see that the search head has checked in and its details are on the master. Initialization of the search head is good; pass4SymmKey values have been replaced throughout the environment so they all match. I have changed CMSearchHead logging to DEBUG and get no additional information about the error. Any great ideas to try?
We have a heavy forwarder with splunk_ta_o365 installed. The integration works for some time, and then I start getting the error below until the next Linux reboot (after the restart, the integration starts working again):

2020-07-16 03:05:02,350 level=ERROR pid=15582 tid=MainThread logger=splunk_ta_o365.modinputs.service_status pos=utils.py:wrapper:67 | datainput="ServiceStatus" start_time=1594857902 | message="Data input was interrupted by an unhandled exception." Traceback (most recent call last): File "/opt/splunk/etc/apps/splunk_ta_o365/bin/splunksdc/utils.py", line 65, in wrapper return func(*args, **kwargs) File "/opt/splunk/etc/apps/splunk_ta_o365/bin/splunk_ta_o365/modinputs/service_status.py", line 135, in run session = token.auth(session) File "/opt/splunk/etc/apps/splunk_ta_o365/bin/splunk_ta_o365/common/token.py", line 56, in auth self._token = self._policy(self._resource, session) File "/opt/splunk/etc/apps/splunk_ta_o365/bin/splunk_ta_o365/common/token.py", line 37, in __call__ return self._portal.get_token_by_psk(self._client_id, self._client_secret, resource, session) File "/opt/splunk/etc/apps/splunk_ta_o365/bin/splunk_ta_o365/common/portal.py", line 86, in get_token_by_psk 'resource': resource File "/opt/splunk/etc/apps/splunk_ta_o365/bin/3rdparty/requests/sessions.py", line 555, in post return self.request('POST', url, data=data, json=json, **kwargs) File "/opt/splunk/etc/apps/splunk_ta_o365/bin/3rdparty/requests/sessions.py", line 508, in request resp = self.send(prep, **send_kwargs) File "/opt/splunk/etc/apps/splunk_ta_o365/bin/3rdparty/requests/sessions.py", line 618, in send r = adapter.send(request, **kwargs) File "/opt/splunk/etc/apps/splunk_ta_o365/bin/3rdparty/requests/adapters.py", line 502, in send raise ProxyError(e, request=request) ProxyError: HTTPSConnectionPool(host='login.microsoftonline.com', port=443): Max retries exceeded with url: /c8eca3ca-1276-46d5-9d9d-a0f2a028920f/oauth2/token (Caused by ProxyError('Cannot connect 
to proxy.', NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection object at 0x7fac68273310>: Failed to establish a new connection: [Errno -2] Name or service not known',))) 2020-07-16 03:05:02,351 level=INFO pid=15582 tid=MainThread logger=splunksdc.collector pos=collector.py:run:248 | | message="Modular input exited." 2020-07-16 03:10:04,245 level=INFO pid=19359 tid=MainThread logger=splunksdc.collector pos=collector.py:run:245 | | message="Modular input started." 2020-07-16 03:10:04,259 level=INFO pid=19359 tid=MainThread logger=splunk_ta_o365.common.settings pos=settings.py:load:29 | start_time=1594858204 datainput="ServiceStatus" | message="Load proxy settings success." username="" host="10.232.233.70" enabled=True port="8080"
Hi, we have the following query:

index=yyy sourcetype=zzz "RAISE_ALERT" logger="aaa" | table uuid message timestamp | eval state="alert" | append [SEARCH index=yyy sourcetype=zzz "CLEAR_ALERT" logger="aaa" | table uuid message timestamp | eval state="no_alert" ] | stats latest(state) as state by uuid

But this query shows nothing for state; it shows only uuid. The query before and without latest works just fine. Here is a screenshot of the result of everything before stats. If we replace stats latest with stats last, we can see uuid and state; it's just not the last observed value of state for that uuid. Any idea why this can happen?

Update: Figured out the issue. The fields are being extracted using table, but then there is no way for the query to determine the timestamp from the extracted fields. Field extraction is not needed for our use case anyway; removing both table clauses makes the query work. This is the updated query, which works:

index=yyy sourcetype=zzz "RAISE_ALERT" logger="aaa" | eval state="alert" | append [SEARCH index=yyy sourcetype=zzz "CLEAR_ALERT" logger="aaa" | eval state="no_alert" ] | stats latest(state) as state by uuid
I tried to access a free trial Cloud instance in a browser (in 2 different browsers). The access data is correct; I checked it multiple times and tried with both email and username. I saw that some other users of this forum also had this problem, but they didn't get any answers.
Hi! I have an alert which, when triggered, saves its results to a lookup CSV file ("Output results to lookup"). Is there a way to have a dynamic filename for where the data is saved? I.e., instead of the single name results.csv, I would like to add the date at the end: results_2020_08_03.csv or something like this. I haven't found anything in the documentation about it. Thanks in advance, przemek