All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


I have converted the date field to epoch time and the results from the query are sorted as expected, but when I click the sort option in the table for the Date field, dates spanning 2020 and 2021 are not sorted in ascending order. I am using Splunk 7.0; could this be an issue with the instance? My query returns correctly sorted results; it is only when I click the sort icon that the order is wrong.
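A hedged sketch of one common workaround: the table's sort control sorts the displayed string, so a formatted date column can sort lexicographically even when the underlying search is sorted by epoch. The field name and source date format below are assumptions, not from the post. Keeping the epoch value for sorting and rendering the display copy in ISO format (which also sorts correctly as a string) sidesteps the issue:

```
| eval Date_epoch = strptime(Date, "%d/%m/%Y")
| sort 0 Date_epoch
| eval Date = strftime(Date_epoch, "%Y-%m-%d")
| fields - Date_epoch
```

Because "2020-12-31" < "2021-01-01" lexicographically as well as chronologically, the header sort icon should then behave as expected.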
Hi there, I have a free trial of Splunk and am trying to install the Azure Add-on so I can do some testing with Event Hubs and similar. However, this requires raising a support case, which I can't do as I don't have an active support contract. Is there another way I can get the add-on installed? Thanks,
Hi All, Does anyone else have an issue where the CrowdStrike Stream modular input stops working? The process is still there but is hung. Killing the process allows Splunk to re-create the .py process and start collecting again (this happens with other modular inputs too). Why does this happen? Is it being addressed?
Our system has grown over time to thousands of enrichments, TAs, and custom apps. We are planning to upgrade Splunk to another version and wanted to automate as much of the testing as possible, so I've developed a plan to pump live data pre- and post-change into a TEST machine and verify that the important fields, eventtypes, and tags are working correctly. But this measurement is done manually. Is there an easy way or module to detect such anomalies or divergence, given a set of fields it should check? For example, what I'm looking for is:

# set of key-fields
user
eventtypes
tags
host

I need to detect whether the values of these key-fields have changed dramatically between two cycles (or dates), so we can say a particular TA or upgrade broke those fields.
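A minimal sketch of one way to measure this kind of divergence, with the index name and the one-day cycle boundary both placeholders (not from the post): count distinct values of each key field per cycle and compare the rows.

```
index=your_index earliest=-2d@d latest=@d
| eval cycle=if(_time < relative_time(now(), "-1d@d"), "pre", "post")
| stats dc(user) AS users, dc(eventtype) AS eventtypes, dc(tag) AS tags, dc(host) AS hosts BY cycle
```

A large drop in any count between the "pre" and "post" rows would flag a likely broken extraction or TA. This is a sketch of the comparison idea, not a tested anomaly-detection module.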
Hi, I want to calculate Min and Max response time only if the status is success. Below is the table format:

MicroService     Total_Transaction  Success  Failure  Avg   Failure%  Success%  Min_Response_Time  Max_Response_time
LostStolenCard   7                  5        2        5223  28.57     71.43
CreditLimit      15                 0        15       4324  0         100
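One hedged sketch, assuming the raw events carry fields named status and response_time (both assumptions, since the post does not show the raw fields): a conditional eval inside stats restricts min/max to successful transactions only.

```
index=your_index
| stats count AS Total_Transaction,
        count(eval(status="Success")) AS Success,
        count(eval(status="Failure")) AS Failure,
        avg(response_time) AS Avg,
        min(eval(if(status="Success", response_time, null()))) AS Min_Response_Time,
        max(eval(if(status="Success", response_time, null()))) AS Max_Response_time
  BY MicroService
```

The if(..., response_time, null()) wrapper makes non-success events contribute nothing to min/max, so the two response-time columns reflect successes only.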
Since updating to 8.1.1 on our heavy forwarder we've been seeing the error message below within the add-on's internal logs. Has anyone else experienced this and is aware of a fix?

Traceback (most recent call last):
  File "/opt/splunk/etc/apps/syndication/bin/modular_input.zip/modular_input/modular_input_base_class.py", line 1095, in execute
    self.do_run(in_stream, log_exception_and_continue=True)
  File "/opt/splunk/etc/apps/syndication/bin/modular_input.zip/modular_input/modular_input_base_class.py", line 976, in do_run
    self.run(stanza, cleaned_params, input_config)
  File "/opt/splunk/etc/apps/syndication/bin/syndication.py", line 383, in run
    if last_entry_date_retrieved is not None and last_entry_date_retrieved > last_entry_date:
TypeError: '>' not supported between instances of 'time.struct_time' and 'NoneType'

@LukeMurphey
Hello all, may I know whether it's possible to create multiple tenants under my cloud account? I am trying to process and present each customer's data separately, so each customer gets separate dashboards with only their own data. Thank you in advance, CM
Please assume the below in transforms.conf:

[send_rawevents]
REGEX = .
DEST_KEY = _TCP_ROUTING
FORMAT = indexer1

[send_to_null_tcp]
REGEX = CEF\:0\|ids
DEST_KEY = _TCP_ROUTING
FORMAT = nothing

[send_to_syslog]
REGEX = CEF\:0\|ids
DEST_KEY = _SYSLOG_ROUTING
FORMAT = syslog_group

The objective is to send all events to the TCP output unless they match the regex CEF:0|ids, in which case events should be sent to the syslog output. What I can't sort out is how to reset _TCP_ROUTING back to nothing for the events that are routed to syslog (since I don't want them duplicated). Does anyone have any ideas? Thanks! (More details can be found here: https://www.linkedin.com/pulse/how-make-splunk-heavy-forwarder-reiterate-over-after-changing-efi/)
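A hedged sketch of the ordering idea that is often suggested for this: transforms listed in a single TRANSFORMS- attribute in props.conf run left to right, so a later transform matching the same regex can overwrite _TCP_ROUTING after the catch-all has set it. The stanza names reuse the post's; the sourcetype name is a placeholder, and whether an empty or disabled output group is the right way to make FORMAT = nothing a no-op is worth verifying against the outputs.conf spec:

props.conf:

```
[your_sourcetype]
TRANSFORMS-routing = send_rawevents, send_to_syslog, send_to_null_tcp
```

outputs.conf:

```
[tcpout:nothing]
disabled = true
```

With send_to_null_tcp last, events matching CEF:0|ids end up with _SYSLOG_ROUTING = syslog_group and _TCP_ROUTING pointing at a group that sends nowhere, while everything else keeps _TCP_ROUTING = indexer1.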
I can collect CloudWatch logs and descriptions successfully, but not S3 buckets. My Splunk EC2 instance has full access rights to the S3 buckets, but they are not listed on the add-on's settings page. I tried to get the S3 buckets from Add Data, but I got internal logs like: "The last data ingestion iteration hasn't been completed yet". I attached the internal logs; hope someone can help.

---------------internal logs----------------------------------------------
2021-01-18 18:57:59,217 level=ERROR pid=11508 tid=Thread-4 logger=splunk_ta_aws.modinputs.generic_s3.aws_s3_data_loader pos=aws_s3_data_loader.py:index_data:91 | datainput="s3_test_122401" bucket_name="aws-sb-dev-ia0767-splunk-test" | message="Failed to collect data through generic S3." start_time=1610963333 job_uid="96ba7b5d-6dff-4d81-bc6b-b4581d4d3066"
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/splunk_ta_aws/modinputs/generic_s3/aws_s3_data_loader.py", line 86, in index_data
    self._do_index_data()
  File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/splunk_ta_aws/modinputs/generic_s3/aws_s3_data_loader.py", line 107, in _do_index_data
    self.collect_data()
  File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/splunk_ta_aws/modinputs/generic_s3/aws_s3_data_loader.py", line 153, in collect_data
    self._discover_keys(index_store)
  File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/splunk_ta_aws/modinputs/generic_s3/aws_s3_data_loader.py", line 224, in _discover_keys
    bucket = self._get_bucket(credentials)
  File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/splunk_ta_aws/modinputs/generic_s3/aws_s3_data_loader.py", line 370, in _get_bucket
    bucket = conn.get_bucket(self._config[asc.bucket_name])
  File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/3rdparty/python3/boto/s3/connection.py", line 509, in get_bucket
    return self.head_bucket(bucket_name, headers=headers)
  File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/3rdparty/python3/boto/s3/connection.py", line 528, in head_bucket
    response = self.make_request('HEAD', bucket_name, headers=headers)
  File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/splunk_ta_aws/common/boto2_s3_patch.py", line 12, in wrapper
    response = func(*args, **kwargs)
  File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/3rdparty/python3/boto/s3/connection.py", line 671, in make_request
    retry_handler=retry_handler
  File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/3rdparty/python3/boto/connection.py", line 1084, in make_request
    retry_handler=retry_handler)
  File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/3rdparty/python3/boto/connection.py", line 1043, in _mexe
    raise ex
  File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/3rdparty/python3/boto/connection.py", line 956, in _mexe
    request.body, request.headers)
  File "/opt/splunk/lib/python3.7/http/client.py", line 1244, in request
    self._send_request(method, url, body, headers, encode_chunked)
  File "/opt/splunk/lib/python3.7/http/client.py", line 1290, in _send_request
    self.endheaders(body, encode_chunked=encode_chunked)
  File "/opt/splunk/lib/python3.7/http/client.py", line 1239, in endheaders
    self._send_output(message_body, encode_chunked=encode_chunked)
  File "/opt/splunk/lib/python3.7/http/client.py", line 1026, in _send_output
    self.send(msg)
  File "/opt/splunk/lib/python3.7/http/client.py", line 966, in send
    self.connect()
  File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/3rdparty/python3/boto/https_connection.py", line 119, in connect
    sock = socket.create_connection((self.host, self.port), self.timeout)
  File "/opt/splunk/lib/python3.7/socket.py", line 727, in create_connection
    raise err
  File "/opt/splunk/lib/python3.7/socket.py", line 716, in create_connection
    sock.connect(sa)
socket.timeout: timed out
2021-01-18 18:57:53,442 level=INFO pid=11508 tid=Thread-13 logger=splunk_ta_aws.modinputs.generic_s3.aws_s3_data_loader pos=aws_s3_data_loader.py:_do_index_data:96 | datainput="s3_test_122401" bucket_name="aws-sb-dev-ia0767-splunk-test" | message="The last data ingestion iteration hasn't been completed yet."
2021-01-18 18:57:23,442 level=INFO pid=11508 tid=Thread-13 logger=splunk_ta_aws.modinputs.generic_s3.aws_s3_data_loader pos=aws_s3_data_loader.py:_do_index_data:96 | datainput="s3_test_122401" bucket_name="aws-sb-dev-ia0767-splunk-test" | message="The last data ingestion iteration hasn't been completed yet."
2021-01-18 18:56:53,442 level=INFO pid=11508 tid=Thread-13 logger=splunk_ta_aws.modinputs.generic_s3.aws_s3_data_loader pos=aws_s3_data_loader.py:_do_index_data:96 | datainput="s3_test_122401" bucket_name="aws-sb-dev-ia0767-splunk-test" | message="The last data ingestion iteration hasn't been completed yet."
2021-01-18 18:56:23,442 level=INFO pid=11508 tid=Thread-13 logger=splunk_ta_aws.modinputs.generic_s3.aws_s3_data_loader pos=aws_s3_data_loader.py:_do_index_data:96 | datainput="s3_test_122401" bucket_name="aws-sb-dev-ia0767-splunk-test" | message="The last data ingestion iteration hasn't been completed yet."
2021-01-18 18:55:53,442 level=INFO pid=11508 tid=Thread-13 logger=splunk_ta_aws.modinputs.generic_s3.aws_s3_data_loader pos=aws_s3_data_loader.py:_do_index_data:96 | datainput="s3_test_122401" bucket_name="aws-sb-dev-ia0767-splunk-test" | message="The last data ingestion iteration hasn't been completed yet."
2021-01-18 18:55:23,442 level=INFO pid=11508 tid=Thread-13 logger=splunk_ta_aws.modinputs.generic_s3.aws_s3_data_loader pos=aws_s3_data_loader.py:_do_index_data:96 | datainput="s3_test_122401" bucket_name="aws-sb-dev-ia0767-splunk-test" | message="The last data ingestion iteration hasn't been completed yet."
2021-01-18 18:54:53,444 level=INFO pid=11508 tid=Thread-13 logger=splunk_ta_aws.modinputs.generic_s3.aws_s3_data_loader pos=aws_s3_data_loader.py:_do_index_data:96 | datainput="s3_test_122401" bucket_name="aws-sb-dev-ia0767-splunk-test" | message="The last data ingestion iteration hasn't been completed yet."
2021-01-18 18:54:23,442 level=INFO pid=11508 tid=Thread-13 logger=splunk_ta_aws.modinputs.generic_s3.aws_s3_data_loader pos=aws_s3_data_loader.py:_do_index_data:96 | datainput="s3_test_122401" bucket_name="aws-sb-dev-ia0767-splunk-test" | message="The last data ingestion iteration hasn't been completed yet."
2021-01-18 18:53:53,442 level=INFO pid=11508 tid=Thread-13 logger=splunk_ta_aws.modinputs.generic_s3.aws_s3_data_loader pos=aws_s3_data_loader.py:_do_index_data:96 | datainput="s3_test_122401" bucket_name="aws-sb-dev-ia0767-splunk-test" | message="The last data ingestion iteration hasn't been completed yet."
2021-01-18 18:53:23,442 level=INFO pid=11508 tid=Thread-13 logger=splunk_ta_aws.modinputs.generic_s3.aws_s3_data_loader pos=aws_s3_data_loader.py:_do_index_data:96 | datainput="s3_test_122401" bucket_name="aws-sb-dev-ia0767-splunk-test" | message="The last data ingestion iteration hasn't been completed yet."
2021-01-18 18:52:53,442 level=INFO pid=11508 tid=Thread-13 logger=splunk_ta_aws.modinputs.generic_s3.aws_s3_data_loader pos=aws_s3_data_loader.py:_do_index_data:96 | datainput="s3_test_122401" bucket_name="aws-sb-dev-ia0767-splunk-test" | message="The last data ingestion iteration hasn't been completed yet."
2021-01-18 18:52:23,442 level=INFO pid=11508 tid=Thread-13 logger=splunk_ta_aws.modinputs.generic_s3.aws_s3_data_loader pos=aws_s3_data_loader.py:_do_index_data:96 | datainput="s3_test_122401" bucket_name="aws-sb-dev-ia0767-splunk-test" | message="The last data ingestion iteration hasn't been completed yet."
2021-01-18 18:51:53,442 level=INFO pid=11508 tid=Thread-13 logger=splunk_ta_aws.modinputs.generic_s3.aws_s3_data_loader pos=aws_s3_data_loader.py:_do_index_data:96 | datainput="s3_test_122401" bucket_name="aws-sb-dev-ia0767-splunk-test" | message="The last data ingestion iteration hasn't been completed yet."
2021-01-18 18:51:23,442 level=INFO pid=11508 tid=Thread-13 logger=splunk_ta_aws.modinputs.generic_s3.aws_s3_data_loader pos=aws_s3_data_loader.py:_do_index_data:96 | datainput="s3_test_122401" bucket_name="aws-sb-dev-ia0767-splunk-test" | message="The last data ingestion iteration hasn't been completed yet."
2021-01-18 18:50:53,442 level=INFO pid=11508 tid=Thread-13 logger=splunk_ta_aws.modinputs.generic_s3.aws_s3_data_loader pos=aws_s3_data_loader.py:_do_index_data:96 | datainput="s3_test_122401" bucket_name="aws-sb-dev-ia0767-splunk-test" | message="The last data ingestion iteration hasn't been completed yet."
2021-01-18 18:50:23,442 level=INFO pid=11508 tid=Thread-13 logger=splunk_ta_aws.modinputs.generic_s3.aws_s3_data_loader pos=aws_s3_data_loader.py:_do_index_data:96 | datainput="s3_test_122401" bucket_name="aws-sb-dev-ia0767-splunk-test" | message="The last data ingestion iteration hasn't been completed yet."
2021-01-18 18:49:53,442 level=INFO pid=11508 tid=Thread-13 logger=splunk_ta_aws.modinputs.generic_s3.aws_s3_data_loader pos=aws_s3_data_loader.py:_do_index_data:96 | datainput="s3_test_122401" bucket_name="aws-sb-dev-ia0767-splunk-test" | message="The last data ingestion iteration hasn't been completed yet."
2021-01-18 18:49:23,442 level=INFO pid=11508 tid=Thread-13 logger=splunk_ta_aws.modinputs.generic_s3.aws_s3_data_loader pos=aws_s3_data_loader.py:_do_index_data:96 | datainput="s3_test_122401" bucket_name="aws-sb-dev-ia0767-splunk-test" | message="The last data ingestion iteration hasn't been completed yet."
2021-01-18 18:48:53,545 level=INFO pid=11508 tid=Thread-4 logger=splunk_ta_aws.modinputs.generic_s3.aws_s3_data_loader pos=aws_s3_data_loader.py:_get_bucket:365 | datainput="s3_test_122401" bucket_name="aws-sb-dev-ia0767-splunk-test", start_time=1610963333 job_uid="96ba7b5d-6dff-4d81-bc6b-b4581d4d3066", phase="discover_key" | message="Create new S3 connection."
2021-01-18 18:48:53,545 level=INFO pid=11508 tid=Thread-4 logger=splunk_ta_aws.common.aws_credentials pos=aws_credentials.py:load:163 | datainput="s3_test_122401" bucket_name="aws-sb-dev-ia0767-splunk-test", start_time=1610963333 job_uid="96ba7b5d-6dff-4d81-bc6b-b4581d4d3066", phase="discover_key" | message="load credentials succeed" arn="arn:aws:sts::052086164386:assumed-role/splunk_ec2_access/i-000c869a7adf815b0" expiration="2021-01-18 15:23:20+00:00"
2021-01-18 18:48:53,444 level=INFO pid=11508 tid=Thread-4 logger=splunk_ta_aws.common.aws_credentials pos=aws_credentials.py:_load_source_credentials:195 | datainput="s3_test_122401" bucket_name="aws-sb-dev-ia0767-splunk-test", start_time=1610963333 job_uid="96ba7b5d-6dff-4d81-bc6b-b4581d4d3066", phase="discover_key" | message="fetch ec2 instance credentials"
2021-01-18 18:48:53,444 level=INFO pid=11508 tid=Thread-4 logger=splunk_ta_aws.common.aws_credentials pos=aws_credentials.py:load:156 | datainput="s3_test_122401" bucket_name="aws-sb-dev-ia0767-splunk-test", start_time=1610963333 job_uid="96ba7b5d-6dff-4d81-bc6b-b4581d4d3066", phase="discover_key" | message="begin loading credentials" aws_account="splunk_ec2_access" aws_iam_role=None
2021-01-18 18:48:53,444 level=INFO pid=11508 tid=Thread-4 logger=splunk_ta_aws.modinputs.generic_s3.aws_s3_data_loader pos=aws_s3_data_loader.py:_discover_keys:220 | datainput="s3_test_122401" bucket_name="aws-sb-dev-ia0767-splunk-test", start_time=1610963333 job_uid="96ba7b5d-6dff-4d81-bc6b-b4581d4d3066", phase="discover_key" | message="Start of discovering S3 keys."
2021-01-18 18:48:53,443 level=INFO pid=11508 tid=Thread-4 logger=splunk_ta_aws.modinputs.generic_s3.aws_s3_data_loader pos=aws_s3_data_loader.py:collect_data:143 | datainput="s3_test_122401" bucket_name="aws-sb-dev-ia0767-splunk-test", start_time=1610963333 job_uid="96ba7b5d-6dff-4d81-bc6b-b4581d4d3066" | message="Start processing" last_modified="2020-12-17T07:30:59.000Z" latest_scanned=""
2021-01-18 18:48:53,442 level=INFO pid=11508 tid=Thread-4 logger=splunk_ta_aws.modinputs.generic_s3.aws_s3_data_loader pos=aws_s3_data_loader.py:_do_index_data:105 | datainput="s3_test_122401" bucket_name="aws-sb-dev-ia0767-splunk-test", start_time=1610963333 job_uid="96ba7b5d-6dff-4d81-bc6b-b4581d4d3066" | message="Start processing."
Hi all, I have a strange issue that I can't seem to find any info on, and I'm hoping someone can help me. I have a few alerts that are sent to Slack using the slack_alerts add-on from Splunkbase. Recently, the results of these alerts are duplicated within the alert itself (I am not receiving multiple copies of the same alert individually).

This is the alert config:
AWS Authentication Failed
*Time:* $result.eventTime$
*Event:* $result.eventName$
*Account:* $result.recipientAccountId$
*User:* $result.userName$
*Source IP:* $result.sourceIPAddress$
*EventID:* $result.eventID$

And the result:
AWS Authentication Failed
Time: 2021-01-18T10:40:16Z 2021-01-18T10:40:16Z
Event: AssumeRoleWithSAML AssumeRoleWithSAML
Account: 22xxxxxxxxxx 22xxxxxxxxxx
User: xxxxxxxxx
Source IP: 54.89.xxxxxxx 54.89.xxxxxxxx
EventID: 862b7079-e38b-4b1e-xxxxxxxxx 862b7079-e38b-4b1e-xxxxxxxx

I have tried recreating the alerts and removing and re-adding the Slack add-on, but I get the same behaviour. Hoping someone can help!
Hi All, I am looking for a dashboard panel where users can enter their comments in one column by typing them in, and the current date should appear automatically in a second column. On clicking submit, these values must be saved and shown to the user as well (maybe via a lookup?). Your help is appreciated. TIA
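For what it's worth, a minimal sketch of the lookup idea: a dashboard search driven by a text input token that appends a row to a lookup, plus a second panel that reads it back. The token name ($comment_tok$) and lookup file name (user_comments.csv) are hypothetical, and the lookup would need to be created and permissioned first.

```
| makeresults
| eval comment="$comment_tok$", date=strftime(now(), "%Y-%m-%d")
| table comment, date
| outputlookup append=true user_comments.csv
```

The display panel would then simply run:

```
| inputlookup user_comments.csv
```

Wiring the first search to a Submit button (so it only runs on click) is the usual pattern, but the details depend on the dashboard framework in use.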
I am not able to import one user from Active Directory and am getting the error below:

ERROR [6003f9b5017f2b440f0850] admin:1678 - AdminController.listEntities: Insufficient user permissions for "/en-US/manager/search/admin/LDAP-groups/ITISSplunkITAzure". Returning 404.

I do see the security group members in the LDAP configuration group mapping, but the user is not visible in Users.
Hi, I have a raw log with a structure like this:

TIME|FROM|TO|URL|ERROR|STATUS|ALERT

Example:
Wed Jan 6 15:10:01 2021|Department A|Department B|www.abc.com|0|Connected|Call Department C

I want to use rex to dissect them into their own fields like TIME, FROM, TO, etc. The thing is, every field in the log is dynamic, meaning the values don't always stay the same. I'm pretty new to regular expressions and Splunk, so I don't know how to do this. Thank you in advance.
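If the seven pipe-delimited positions are stable (the values can change freely; only the delimiter layout matters), one sketch is a single rex that captures everything between pipes:

```
| rex field=_raw "^(?<TIME>[^|]+)\|(?<FROM>[^|]+)\|(?<TO>[^|]+)\|(?<URL>[^|]+)\|(?<ERROR>[^|]+)\|(?<STATUS>[^|]+)\|(?<ALERT>.+)$"
```

Each [^|]+ group means "one or more characters that are not a pipe", so dynamic values are fine as long as they never contain a literal |. For index-time or permanent extractions, a DELIMS-based transform would be an alternative to rex.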
Hello, I have data with fields like "Field Id" and "Second Id", and I want to merge these two fields into one. But when I write EVAL-sourcetype = "Field Id"+"Second Id", as a result I get the merged field names but not the values. I also used this configuration in props.conf, but without success:

[.stanza.]
FIELDALIAS-extract_fieldId = "Field Id" as FieldID
FIELDALIAS-extract_secondId = "Second Id" as SecondID
EVAL-sourcetype = FieldID+SecondID
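A likely explanation, hedged: in eval expressions, double quotes create string literals (which is why the field names appear as text rather than values), field names containing spaces must be wrapped in single quotes, and string concatenation uses a dot rather than + (which attempts numeric addition and yields null on strings). A sketch against the stanza above:

```
[.stanza.]
EVAL-sourcetype = 'Field Id' . 'Second Id'
```

If the aliases are kept, EVAL-sourcetype = FieldID . SecondID should behave the same, subject to props.conf evaluation-order caveats between FIELDALIAS and EVAL.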
I have a table with just two columns, 'Name' and 'Value'. One of the rows has the 'Value' 'Fail', 'Pass', or 'NA'. I have another table, which has a bunch of related data. I have two options for the second table:
1. I want the query for table 2 to run only when the 'Value' in the first table is 'Fail', and table 2 to remain hidden otherwise, OR
2. I want that table to be displayed only when the 'Value' in the first table is 'Fail'.
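Both options are commonly handled in Simple XML with a token set from the first search's results and a depends attribute on the second panel (a panel whose token is unset neither runs nor displays, which covers option 1 as well). A sketch, with the token name, field name, and placeholder queries all assumed, and the exact condition syntax worth double-checking against the dashboard docs:

```
<search>
  <query>... first table search ...</query>
  <done>
    <condition match="$result.Value$ == &quot;Fail&quot;">
      <set token="show_table2">true</set>
    </condition>
    <condition>
      <unset token="show_table2"></unset>
    </condition>
  </done>
</search>

<panel depends="$show_table2$">
  <table>
    <search>
      <query>... second table search ...</query>
    </search>
  </table>
</panel>
```

$result.Value$ refers to the first row of the first search's results, so the row carrying the Fail/Pass/NA value may need to be sorted to the top.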
Hi everyone, how can I show a trend indicator for hardcoded values? I have one single value like this: | stats count as test | eval test=test+10 | table test. I am getting 10 as the value. I want to show a percentage indicator on it, but I am not getting that option in the single value visualization. Can someone guide me on that?
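A hedged sketch of one workaround: the Single Value trend indicator only appears when the search produces a time series with at least two points, so one can fabricate a tiny series with makeresults. The one-hour span and the earlier value of 8 below are arbitrary assumptions for illustration:

```
| makeresults count=2
| streamstats count AS n
| eval _time=if(n=1, relative_time(now(), "-1h"), now())
| eval test=if(n=1, 8, 10)
| timechart span=1h latest(test) AS test
```

With the Single Value visualization, the trend compares the last two points, and the format menu offers showing that change as a percentage.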
The ServiceNow add-on for Splunk does not support forced TLS. Is there a way with Splunk to detect mail messages coming from a specific domain that did not use TLS?
Has anybody faced this issue?

$ /opt/splunk/bin/splunk add shcluster-member
Deserialization failed. Could not find expected key 'current_member_uri'

Fresh SH installation; all SHs and the master are running the same version, 7.2.0. Thanks in advance.
Hi, I have a Splunk instance on Linux and I am trying to get Windows network information from an IIS web server, for example port number, protocol, and IP address. I can get it to work on a Windows instance, just not from Linux. I'm not sure if it uses WMI; I just wondered whether it is possible, or would it need to be a PowerShell script? Thanks, Joe
Hello, I am working on a correlation search (CS) for Enterprise Security that, when run manually, returns results; however, when it is scheduled to run, it does not return anything. I've looked in the _internal index and found that:
- 0 suppressions are taking effect
- 0 results are returned
- All searches ran successfully

I've looked in the notable index and confirmed that 0 alerts are firing. The CS is running as admin within the application context of Enterprise Security. This is the search:

index=cylance_protect sourcetype=threat
| eval FirstFound=split('First Found'," ")
| eval FirstFoundDate=mvindex(FirstFound,0)
| eval FirstFoundDate_epoch=strptime(FirstFoundDate, "%m/%d/%Y")
| eval currentTime=now()
| eval currentTime=strftime(currentTime, "%m/%d/%Y")
| eval currentTime_epoch=strptime(currentTime, "%m/%d/%Y")
| eval CreatedDaysAgo=(currentTime_epoch-FirstFoundDate_epoch)/86400
| eval CreatedDaysAgo=round(CreatedDaysAgo)
| search CreatedDaysAgo < 2
| table _time FirstFound CreatedDaysAgo DeviceName Tenant user action "Cylance Score" signature "Detected By" "Ever Run" "File Name" file_path file_hash

Are the eval statements causing this issue? I used the above logic to return only 'new' Cylance detections within the last day.
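One low-risk diagnostic to sketch for a manual-vs-scheduled discrepancy like this: compare the scheduled runs in the scheduler logs to see the status and result counts at dispatch time. The savedsearch name below is a placeholder:

```
index=_internal sourcetype=scheduler savedsearch_name="Your CS Name"
| table _time, savedsearch_name, status, result_count, run_time
```

If result_count is zero there while the identical manual run returns rows, a common culprit in patterns like this is the scheduled search's earliest/latest window (or indexing lag at dispatch time) rather than the eval statements themselves; swapping | search CreatedDaysAgo < 2 for | where CreatedDaysAgo < 2 is also a cheap thing to try, since where is the safer operator for numeric comparisons on calculated fields.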